Oct 06 07:16:42 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 06 07:16:42 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 06 07:16:42 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 
07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to
system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to
system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06
07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]:
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Oct 06 07:16:42 crc restorecon[4681]:
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc
restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as
customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:42 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:42 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 06 07:16:43 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 
crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc 
restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 06 07:16:43 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
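The long run of "not reset as customized by admin" messages above is expected restorecon behavior: `container_file_t` is a customizable SELinux type, so restorecon leaves existing labels of that type in place unless forced. A hedged sketch of how to inspect this on an SELinux-enabled host (the paths are taken from this log; the commands assume the targeted policy and root access, and forcing a relabel on a running node is shown for illustration only):

```shell
# Customizable types are skipped by restorecon unless -F is given;
# container_file_t should appear in this list on a node with
# container-selinux installed:
cat /etc/selinux/targeted/contexts/customizable_types

# Show the current label on one of the paths the log reports as skipped:
ls -Z /var/lib/kubelet/device-plugins/kubelet.sock

# Force a relabel even for customizable types (use with care on a live
# node -- running containers depend on these category labels):
restorecon -v -F /var/lib/kubelet/device-plugins/kubelet.sock
```

The `s0:cNNN,cNNN` suffixes in the skipped labels are per-pod MCS categories assigned by the container runtime, which is why restorecon treats them as admin customizations rather than drift.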
Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 06 07:16:43 crc kubenswrapper[4769]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.927349 4769 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938933 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938963 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938973 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938981 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938990 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.938998 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939006 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939017 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
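The deprecation warnings above all point at the kubelet's `--config` file. A hedged sketch of a `KubeletConfiguration` carrying the settings those flags cover (field values are illustrative, not taken from this node; `--minimum-container-ttl-duration` and `--pod-infra-container-image` have no direct field, per the warnings themselves):

```yaml
# Illustrative KubeletConfiguration equivalents for the deprecated flags
# reported in the log above. Values are placeholders, not node state.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"   # --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec" # --volume-plugin-dir
registerWithTaints:                                           # --register-with-taints
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
systemReserved:                                               # --system-reserved
  cpu: "500m"
  memory: "1Gi"
# --minimum-container-ttl-duration: the warning says to use eviction
# settings instead, e.g.:
evictionHard:
  memory.available: "100Mi"
```

On this node the file would be the one named by the kubelet's `--config` flag; the warnings are informational and the flags still take effect until removed.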
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939028 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939036 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939045 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939052 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939060 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939067 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939075 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939084 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939091 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939099 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939107 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939114 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939123 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939130 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 07:16:43 crc 
kubenswrapper[4769]: W1006 07:16:43.939138 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939146 4769 feature_gate.go:330] unrecognized feature gate: Example Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939153 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939162 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939189 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939197 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939205 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939213 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939221 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939229 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939237 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939244 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939252 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939259 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939267 4769 feature_gate.go:330] unrecognized 
feature gate: AWSEFSDriverVolumeMetrics Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939275 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939285 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939295 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939304 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939313 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939321 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939331 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939340 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939347 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939356 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939366 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939376 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939384 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939392 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939401 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939409 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939418 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939461 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939473 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939482 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939490 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939499 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939507 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939515 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939523 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 
07:16:43.939531 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939540 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939547 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939555 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939562 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939570 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939578 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939586 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.939593 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939805 4769 flags.go:64] FLAG: --address="0.0.0.0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939824 4769 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939841 4769 flags.go:64] FLAG: --anonymous-auth="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939853 4769 flags.go:64] FLAG: --application-metrics-count-limit="100" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939865 4769 flags.go:64] FLAG: --authentication-token-webhook="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939874 4769 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939887 4769 flags.go:64] FLAG: 
--authorization-mode="AlwaysAllow" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939900 4769 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939909 4769 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939918 4769 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939928 4769 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939938 4769 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939948 4769 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939957 4769 flags.go:64] FLAG: --cgroup-root="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939966 4769 flags.go:64] FLAG: --cgroups-per-qos="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939975 4769 flags.go:64] FLAG: --client-ca-file="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939985 4769 flags.go:64] FLAG: --cloud-config="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.939994 4769 flags.go:64] FLAG: --cloud-provider="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940003 4769 flags.go:64] FLAG: --cluster-dns="[]" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940014 4769 flags.go:64] FLAG: --cluster-domain="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940023 4769 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940032 4769 flags.go:64] FLAG: --config-dir="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940041 4769 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 
07:16:43.940051 4769 flags.go:64] FLAG: --container-log-max-files="5" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940063 4769 flags.go:64] FLAG: --container-log-max-size="10Mi" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940073 4769 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940082 4769 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940092 4769 flags.go:64] FLAG: --containerd-namespace="k8s.io" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940102 4769 flags.go:64] FLAG: --contention-profiling="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940111 4769 flags.go:64] FLAG: --cpu-cfs-quota="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940120 4769 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940129 4769 flags.go:64] FLAG: --cpu-manager-policy="none" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940138 4769 flags.go:64] FLAG: --cpu-manager-policy-options="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940149 4769 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940158 4769 flags.go:64] FLAG: --enable-controller-attach-detach="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940167 4769 flags.go:64] FLAG: --enable-debugging-handlers="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940176 4769 flags.go:64] FLAG: --enable-load-reader="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940186 4769 flags.go:64] FLAG: --enable-server="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940217 4769 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940234 4769 flags.go:64] FLAG: --event-burst="100" Oct 
06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940245 4769 flags.go:64] FLAG: --event-qps="50" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940254 4769 flags.go:64] FLAG: --event-storage-age-limit="default=0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940264 4769 flags.go:64] FLAG: --event-storage-event-limit="default=0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940274 4769 flags.go:64] FLAG: --eviction-hard="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940294 4769 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940303 4769 flags.go:64] FLAG: --eviction-minimum-reclaim="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940314 4769 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940323 4769 flags.go:64] FLAG: --eviction-soft="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940334 4769 flags.go:64] FLAG: --eviction-soft-grace-period="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940344 4769 flags.go:64] FLAG: --exit-on-lock-contention="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940353 4769 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940362 4769 flags.go:64] FLAG: --experimental-mounter-path="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940371 4769 flags.go:64] FLAG: --fail-cgroupv1="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940380 4769 flags.go:64] FLAG: --fail-swap-on="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940389 4769 flags.go:64] FLAG: --feature-gates="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940400 4769 flags.go:64] FLAG: --file-check-frequency="20s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940409 4769 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" 
Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940444 4769 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940455 4769 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940465 4769 flags.go:64] FLAG: --healthz-port="10248" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940474 4769 flags.go:64] FLAG: --help="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940484 4769 flags.go:64] FLAG: --hostname-override="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940493 4769 flags.go:64] FLAG: --housekeeping-interval="10s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940502 4769 flags.go:64] FLAG: --http-check-frequency="20s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940512 4769 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940522 4769 flags.go:64] FLAG: --image-credential-provider-config="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940530 4769 flags.go:64] FLAG: --image-gc-high-threshold="85" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940540 4769 flags.go:64] FLAG: --image-gc-low-threshold="80" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940548 4769 flags.go:64] FLAG: --image-service-endpoint="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940557 4769 flags.go:64] FLAG: --kernel-memcg-notification="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940567 4769 flags.go:64] FLAG: --kube-api-burst="100" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940576 4769 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940586 4769 flags.go:64] FLAG: --kube-api-qps="50" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940595 4769 flags.go:64] FLAG: --kube-reserved="" Oct 
06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940604 4769 flags.go:64] FLAG: --kube-reserved-cgroup="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940612 4769 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940622 4769 flags.go:64] FLAG: --kubelet-cgroups="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940631 4769 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940640 4769 flags.go:64] FLAG: --lock-file="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940683 4769 flags.go:64] FLAG: --log-cadvisor-usage="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940694 4769 flags.go:64] FLAG: --log-flush-frequency="5s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940703 4769 flags.go:64] FLAG: --log-json-info-buffer-size="0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940718 4769 flags.go:64] FLAG: --log-json-split-stream="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940727 4769 flags.go:64] FLAG: --log-text-info-buffer-size="0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940736 4769 flags.go:64] FLAG: --log-text-split-stream="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940746 4769 flags.go:64] FLAG: --logging-format="text" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940755 4769 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940765 4769 flags.go:64] FLAG: --make-iptables-util-chains="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940774 4769 flags.go:64] FLAG: --manifest-url="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940783 4769 flags.go:64] FLAG: --manifest-url-header="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940795 4769 flags.go:64] FLAG: 
--max-housekeeping-interval="15s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940805 4769 flags.go:64] FLAG: --max-open-files="1000000" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940817 4769 flags.go:64] FLAG: --max-pods="110" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940826 4769 flags.go:64] FLAG: --maximum-dead-containers="-1" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940835 4769 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940844 4769 flags.go:64] FLAG: --memory-manager-policy="None" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940853 4769 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940863 4769 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940872 4769 flags.go:64] FLAG: --node-ip="192.168.126.11" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940882 4769 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940902 4769 flags.go:64] FLAG: --node-status-max-images="50" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940912 4769 flags.go:64] FLAG: --node-status-update-frequency="10s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940921 4769 flags.go:64] FLAG: --oom-score-adj="-999" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940959 4769 flags.go:64] FLAG: --pod-cidr="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940968 4769 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940983 4769 flags.go:64] FLAG: --pod-manifest-path="" Oct 06 
07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.940993 4769 flags.go:64] FLAG: --pod-max-pids="-1" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941002 4769 flags.go:64] FLAG: --pods-per-core="0" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941011 4769 flags.go:64] FLAG: --port="10250" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941020 4769 flags.go:64] FLAG: --protect-kernel-defaults="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941029 4769 flags.go:64] FLAG: --provider-id="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941039 4769 flags.go:64] FLAG: --qos-reserved="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941048 4769 flags.go:64] FLAG: --read-only-port="10255" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941058 4769 flags.go:64] FLAG: --register-node="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941067 4769 flags.go:64] FLAG: --register-schedulable="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941077 4769 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941094 4769 flags.go:64] FLAG: --registry-burst="10" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941103 4769 flags.go:64] FLAG: --registry-qps="5" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941112 4769 flags.go:64] FLAG: --reserved-cpus="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941121 4769 flags.go:64] FLAG: --reserved-memory="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941132 4769 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941142 4769 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941151 4769 flags.go:64] FLAG: --rotate-certificates="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941160 4769 flags.go:64] 
FLAG: --rotate-server-certificates="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941169 4769 flags.go:64] FLAG: --runonce="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941178 4769 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941188 4769 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941198 4769 flags.go:64] FLAG: --seccomp-default="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941206 4769 flags.go:64] FLAG: --serialize-image-pulls="true" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941215 4769 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941225 4769 flags.go:64] FLAG: --storage-driver-db="cadvisor" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941234 4769 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941244 4769 flags.go:64] FLAG: --storage-driver-password="root" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941253 4769 flags.go:64] FLAG: --storage-driver-secure="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941263 4769 flags.go:64] FLAG: --storage-driver-table="stats" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941272 4769 flags.go:64] FLAG: --storage-driver-user="root" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941281 4769 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941291 4769 flags.go:64] FLAG: --sync-frequency="1m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941300 4769 flags.go:64] FLAG: --system-cgroups="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941309 4769 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Oct 06 07:16:43 crc 
kubenswrapper[4769]: I1006 07:16:43.941322 4769 flags.go:64] FLAG: --system-reserved-cgroup="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941331 4769 flags.go:64] FLAG: --tls-cert-file="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941340 4769 flags.go:64] FLAG: --tls-cipher-suites="[]" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941351 4769 flags.go:64] FLAG: --tls-min-version="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941360 4769 flags.go:64] FLAG: --tls-private-key-file="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941370 4769 flags.go:64] FLAG: --topology-manager-policy="none" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941379 4769 flags.go:64] FLAG: --topology-manager-policy-options="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941388 4769 flags.go:64] FLAG: --topology-manager-scope="container" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941398 4769 flags.go:64] FLAG: --v="2" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941409 4769 flags.go:64] FLAG: --version="false" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941444 4769 flags.go:64] FLAG: --vmodule="" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941456 4769 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.941466 4769 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941693 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941703 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941712 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941720 4769 feature_gate.go:330] unrecognized feature gate: 
InsightsOnDemandDataGather Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941728 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941737 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941746 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941757 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941767 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941775 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941784 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941792 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941800 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941808 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941815 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941823 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941831 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941839 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 
07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941846 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941854 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941862 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941870 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941877 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941890 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941898 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941907 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941915 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941922 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941930 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941937 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941945 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941953 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941961 4769 feature_gate.go:330] 
unrecognized feature gate: MixedCPUsAllocation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941969 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941977 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941985 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.941993 4769 feature_gate.go:330] unrecognized feature gate: Example Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942002 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942012 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942021 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942029 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942037 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942045 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942055 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942065 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942073 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942081 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942090 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942097 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942105 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942113 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942120 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942128 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942136 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942144 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942154 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942162 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942170 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 
07:16:43.942177 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942185 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942193 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942203 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942213 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942221 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942230 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942238 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942247 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942255 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942263 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942271 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942280 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.942293 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true 
DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.953984 4769 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.954064 4769 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954176 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954188 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
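The `unrecognized feature gate` warnings above repeat across several parsing passes, which makes the raw journal hard to skim. A minimal sketch of deduplicating the gate names from such a dump (the sample lines are copied from the log above; in practice the input would come from something like `journalctl -u kubelet`, which is assumed here, not shown in the log itself):

```python
import re

# Sample journal lines as emitted repeatedly in the log above.
sample = """\
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942065 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954344 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.942073 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
"""

# Collect each distinct gate name once, regardless of how often it repeats.
gates = sorted(set(re.findall(r"unrecognized feature gate: (\w+)", sample)))
print(gates)  # ['OnClusterBuild', 'PlatformOperators']
```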
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954203 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954211 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954218 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954223 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954228 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954234 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954239 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954244 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954250 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954255 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954261 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954266 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954271 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954276 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 
07:16:43.954281 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954287 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954292 4769 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954297 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954303 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954308 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954313 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954318 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954323 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954329 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954334 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954339 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954344 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954349 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954355 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 
07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954360 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954366 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954371 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954385 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954391 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954396 4769 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954401 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954406 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954412 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954435 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954441 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954446 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954451 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954458 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 
07:16:43.954463 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954469 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954474 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954481 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954486 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954491 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954497 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954502 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954507 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954512 4769 feature_gate.go:330] unrecognized feature gate: Example Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954517 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954524 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954531 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954539 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954545 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954551 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954556 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954562 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954567 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954572 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954577 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954583 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954588 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954593 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954631 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954647 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.954657 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954834 4769 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954844 4769 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954851 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954856 4769 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954862 4769 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954867 4769 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954873 4769 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954878 4769 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954883 4769 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954890 4769 feature_gate.go:330] 
unrecognized feature gate: NewOLM Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954895 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954900 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954906 4769 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954911 4769 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954916 4769 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954921 4769 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954926 4769 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954931 4769 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954937 4769 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954942 4769 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954948 4769 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954956 4769 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954963 4769 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954970 4769 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954976 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954983 4769 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954990 4769 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.954996 4769 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955003 4769 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955009 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955015 4769 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955021 4769 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955027 4769 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955032 4769 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955039 4769 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955044 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955049 4769 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955055 4769 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955060 4769 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955065 4769 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955070 4769 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955076 4769 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955081 4769 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955087 4769 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955092 4769 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955097 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955102 4769 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955107 4769 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955113 4769 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955118 4769 feature_gate.go:330] unrecognized feature gate: Example Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955123 4769 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 
07:16:43.955128 4769 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955133 4769 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955138 4769 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955143 4769 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955149 4769 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955154 4769 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955160 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955165 4769 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955170 4769 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955175 4769 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955180 4769 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955185 4769 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955191 4769 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955196 4769 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955201 4769 feature_gate.go:330] 
unrecognized feature gate: VSphereStaticIPs Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955206 4769 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955211 4769 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955217 4769 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955222 4769 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 06 07:16:43 crc kubenswrapper[4769]: W1006 07:16:43.955229 4769 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.955239 4769 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.956144 4769 server.go:940] "Client rotation is on, will bootstrap in background" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.961108 4769 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.961223 4769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
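The `certificate_manager` lines above report a certificate expiration, a rotation deadline, and a computed wait of `1119h38m58.605687856s`. That wait can be sanity-checked by subtracting the journal timestamp of the message from the quoted rotation deadline; this is an illustrative sketch using the timestamps copied from the log, not kubelet code:

```python
from datetime import datetime, timezone

# Rotation deadline quoted in the certificate_manager log line above.
deadline = datetime(2025, 11, 21, 22, 55, 42, 570859, tzinfo=timezone.utc)
# Approximate emission time of the "Waiting ..." line (from its own timestamp).
emitted = datetime(2025, 10, 6, 7, 16, 43, 965176, tzinfo=timezone.utc)

wait = deadline - emitted
hours, rem = divmod(int(wait.total_seconds()), 3600)
minutes, seconds = divmod(rem, 60)
print(f"{hours}h{minutes}m{seconds}s")  # 1119h38m58s, matching the logged wait
```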
Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.964620 4769 server.go:997] "Starting client certificate rotation" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.964692 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.965032 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-21 22:55:42.570859301 +0000 UTC Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.965176 4769 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1119h38m58.605687856s for next certificate rotation Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.989189 4769 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 07:16:43 crc kubenswrapper[4769]: I1006 07:16:43.993453 4769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.014231 4769 log.go:25] "Validated CRI v1 runtime API" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.047720 4769 log.go:25] "Validated CRI v1 image API" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.051042 4769 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.058885 4769 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-06-07-11-58-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.058922 4769 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.080439 4769 manager.go:217] Machine: {Timestamp:2025-10-06 07:16:44.075910445 +0000 UTC m=+0.600191632 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f BootID:cdc8e157-8825-4eb6-bd1e-19bb6087ad55 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:6c:0d Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:6c:0d Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d6:5e:ca Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ab:27:b7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:b5:d1:28 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:6f:a4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:c6:b7:8a:4c:5f Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:32:b1:9d:6e:4b:e2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.080727 4769 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.080880 4769 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.081488 4769 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.081757 4769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.081809 4769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.082068 4769 topology_manager.go:138] "Creating topology manager with none policy" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.082082 4769 container_manager_linux.go:303] "Creating device plugin manager" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.082949 4769 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.082988 4769 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.083993 4769 state_mem.go:36] "Initialized new in-memory state store" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.084105 4769 server.go:1245] "Using root directory" path="/var/lib/kubelet" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.089055 4769 kubelet.go:418] "Attempting to sync node with API server" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.089092 4769 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.089132 4769 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.089157 4769 kubelet.go:324] "Adding apiserver pod source" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.089175 4769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 
07:16:44.093275 4769 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.094647 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.094748 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.094965 4769 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.094959 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.095093 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.098258 4769 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099823 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099854 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099864 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099874 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099888 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099898 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099909 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099924 4769 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099934 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099944 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099974 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.099983 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.103814 4769 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.104552 4769 server.go:1280] "Started kubelet" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.104612 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.105456 4769 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.105456 4769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.105983 4769 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 06 07:16:44 crc systemd[1]: Started Kubernetes Kubelet. 
Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.108056 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.108413 4769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.108844 4769 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.108862 4769 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.108945 4769 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.108492 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:40:15.677810483 +0000 UTC Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.109230 4769 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 938h23m31.56859093s for next certificate rotation Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.109256 4769 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.110200 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="200ms" Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.110234 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 
07:16:44.110341 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.110629 4769 server.go:460] "Adding debug handlers to kubelet server" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.110962 4769 factory.go:55] Registering systemd factory Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.110999 4769 factory.go:221] Registration of the systemd container factory successfully Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.112490 4769 factory.go:153] Registering CRI-O factory Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.112536 4769 factory.go:221] Registration of the crio container factory successfully Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.112671 4769 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.112717 4769 factory.go:103] Registering Raw factory Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.112742 4769 manager.go:1196] Started watching for new ooms in manager Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.113773 4769 manager.go:319] Starting recovery of all containers Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.115111 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.5:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186bd5a9aa786ebb default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-06 07:16:44.104519355 +0000 UTC m=+0.628800502,LastTimestamp:2025-10-06 07:16:44.104519355 +0000 UTC m=+0.628800502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125120 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125170 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125184 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125196 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125207 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125217 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125229 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125264 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125277 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125288 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125300 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125312 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125324 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125338 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125352 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125364 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125377 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125389 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125401 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125412 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125443 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125458 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125490 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125507 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125520 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125534 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125550 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125564 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125577 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" 
seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125591 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125607 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125622 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125636 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125647 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125658 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 06 07:16:44 crc 
kubenswrapper[4769]: I1006 07:16:44.125669 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125681 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125692 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125703 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125715 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125727 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125737 4769 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125749 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125762 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125773 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125784 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125796 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125809 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125823 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125836 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125848 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125860 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125879 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125892 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125905 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125917 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125930 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125941 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125953 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125965 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" 
seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125976 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.125987 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126002 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126014 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126025 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126036 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 06 07:16:44 crc 
kubenswrapper[4769]: I1006 07:16:44.126048 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126059 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126072 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126084 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126096 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126108 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126120 4769 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126132 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126143 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126154 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126164 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126177 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126189 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126201 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126212 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126223 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126234 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126245 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126295 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126306 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126318 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126329 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126340 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126352 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126363 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126374 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126385 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126399 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126412 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126438 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126449 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126460 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126471 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126482 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126493 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126504 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126516 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" 
seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126528 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126544 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126558 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.126571 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130477 4769 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130590 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130681 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130759 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130828 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130915 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.130991 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131088 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131183 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131261 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131333 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131407 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131503 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131575 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" 
Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131633 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131690 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131760 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131816 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131876 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131934 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.131988 4769 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132045 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132100 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132160 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132217 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132273 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132331 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132389 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132469 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132684 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132770 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132852 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.132931 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133006 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133131 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133212 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133292 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133374 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133478 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133577 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133665 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133747 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133822 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133893 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.133964 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134031 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134106 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134162 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134220 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134278 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134342 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134403 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134499 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134561 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134616 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134685 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134740 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" 
seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134793 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134852 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134907 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.134962 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135020 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135076 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135130 4769 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135184 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135245 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135350 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135408 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135491 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135555 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135623 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135694 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135751 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135806 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135860 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135915 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.135975 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136032 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136087 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136144 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136201 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136264 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136329 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136391 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136470 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136540 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136622 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136687 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136744 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136801 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136859 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136891 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136906 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136916 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136926 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136940 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136954 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136968 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136980 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.136993 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" 
seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.137006 4769 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.137018 4769 reconstruct.go:97] "Volume reconstruction finished" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.137044 4769 reconciler.go:26] "Reconciler: start to sync state" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.144362 4769 manager.go:324] Recovery completed Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.153733 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.156878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.156917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.156926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.158144 4769 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.158164 4769 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.158193 4769 state_mem.go:36] "Initialized new in-memory state store" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.162900 4769 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.164603 4769 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.164657 4769 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.164696 4769 kubelet.go:2335] "Starting kubelet main sync loop" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.164822 4769 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.165607 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.165667 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.173280 4769 policy_none.go:49] "None policy: Start" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.173912 4769 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.173947 4769 state_mem.go:35] "Initializing new in-memory state store" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.209053 4769 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.221651 4769 manager.go:334] "Starting 
Device Plugin manager" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.221923 4769 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.221945 4769 server.go:79] "Starting device plugin registration server" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.222627 4769 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.222648 4769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.222970 4769 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.223075 4769 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.223091 4769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.235751 4769 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.265660 4769 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.265816 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.267845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 
07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.267887 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.267900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.268137 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.268450 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.268552 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269643 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269787 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269833 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.269939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.270838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.270882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.270898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.270966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.270989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.271000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.271141 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.271172 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.271173 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272278 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272409 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272504 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272718 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.272765 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273779 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.273816 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.274701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.274735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.274749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.311450 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="400ms" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.325060 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.329532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.330053 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.330065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.330093 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.330587 4769 kubelet_node_status.go:99] 
"Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.338783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.338887 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.338937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.338979 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339018 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339051 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339125 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339163 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339195 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339322 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339360 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.339466 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc 
kubenswrapper[4769]: I1006 07:16:44.440737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440810 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440855 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440905 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440958 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.440914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441033 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441063 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441113 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 
07:16:44.441076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441229 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441255 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441309 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441328 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441373 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441406 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441444 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441310 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441470 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441486 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441499 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441505 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.441595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.531632 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.532944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.532983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.532996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.533025 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.533571 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Oct 06 
07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.609728 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.624209 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.645601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.658088 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.663612 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7c71e6c1b1232c1967aeebb7b79b470efbc35e93c66a2e9c71b8c3b4f55e6b50 WatchSource:0}: Error finding container 7c71e6c1b1232c1967aeebb7b79b470efbc35e93c66a2e9c71b8c3b4f55e6b50: Status 404 returned error can't find the container with id 7c71e6c1b1232c1967aeebb7b79b470efbc35e93c66a2e9c71b8c3b4f55e6b50 Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.666001 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c2b2f1d24464ba3f8171ddcd10762caabb17203bbfc9258b47711fd9093cb472 WatchSource:0}: Error finding container c2b2f1d24464ba3f8171ddcd10762caabb17203bbfc9258b47711fd9093cb472: Status 404 returned error can't find the container with id c2b2f1d24464ba3f8171ddcd10762caabb17203bbfc9258b47711fd9093cb472 Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.667223 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:16:44 crc kubenswrapper[4769]: W1006 07:16:44.686388 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1a91df2ac065b59b1dd83b50c786492f0cb22d148e1fcae6d9ca4399e3a7fed4 WatchSource:0}: Error finding container 1a91df2ac065b59b1dd83b50c786492f0cb22d148e1fcae6d9ca4399e3a7fed4: Status 404 returned error can't find the container with id 1a91df2ac065b59b1dd83b50c786492f0cb22d148e1fcae6d9ca4399e3a7fed4 Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.713032 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="800ms" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.934460 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.936182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.936261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.936282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:44 crc kubenswrapper[4769]: I1006 07:16:44.936323 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 07:16:44 crc kubenswrapper[4769]: E1006 07:16:44.936939 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: 
connection refused" node="crc" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.106316 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.170288 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7c71e6c1b1232c1967aeebb7b79b470efbc35e93c66a2e9c71b8c3b4f55e6b50"} Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.171347 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c2b2f1d24464ba3f8171ddcd10762caabb17203bbfc9258b47711fd9093cb472"} Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.172520 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a91df2ac065b59b1dd83b50c786492f0cb22d148e1fcae6d9ca4399e3a7fed4"} Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.173755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"613f38e2f2a717b5b5dc9e8a7e2da0524c7508773f3767dc0a4e69dd70ba845f"} Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.174847 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b9628b6e27c7ccacf34c4cfd9afedf51756bc6e5d747a6db587c278d881a98fc"} Oct 06 07:16:45 crc kubenswrapper[4769]: W1006 07:16:45.283084 
4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.283195 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:45 crc kubenswrapper[4769]: W1006 07:16:45.432991 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.433103 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.513711 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="1.6s" Oct 06 07:16:45 crc kubenswrapper[4769]: W1006 07:16:45.520369 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": 
dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.520470 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:45 crc kubenswrapper[4769]: W1006 07:16:45.642378 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.642491 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.737115 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.738661 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.738687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.738695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:45 crc kubenswrapper[4769]: I1006 07:16:45.738718 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 
07:16:45 crc kubenswrapper[4769]: E1006 07:16:45.739217 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.105682 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.180067 4769 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533" exitCode=0 Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.180156 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.180233 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.181273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.181302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.181311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.183683 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.183737 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.183758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.183786 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.183739 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.184726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.184762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.184772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.186911 4769 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4" exitCode=0 Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.186993 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.187046 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.187886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.187930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.187942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.189293 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4917362a1084a6349600f0f6478289ab518296d52facbc042a709cb1032453d" exitCode=0 Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.189370 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4917362a1084a6349600f0f6478289ab518296d52facbc042a709cb1032453d"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.189445 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.190331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.190367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.190382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.191020 4769 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde" exitCode=0 Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.191064 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde"} Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.191121 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.191842 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.192049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.192079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.192090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.192648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:16:46 crc 
kubenswrapper[4769]: I1006 07:16:46.192668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.192682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.293642 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:46 crc kubenswrapper[4769]: I1006 07:16:46.639805 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.105937 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused Oct 06 07:16:47 crc kubenswrapper[4769]: E1006 07:16:47.114780 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.5:6443: connect: connection refused" interval="3.2s" Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196184 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f"} Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196235 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096"} Oct 06 
07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196247 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196257 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196265 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.196310 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197743 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="18cdc6d37b60bbc47edb98b88b7b897e3762f0d228a69bfe2ea52de65796b5e3" exitCode=0
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197835 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.197831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"18cdc6d37b60bbc47edb98b88b7b897e3762f0d228a69bfe2ea52de65796b5e3"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.198664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.198690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.198699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.199474 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.199527 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.200169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.200202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.200213 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.201847 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.201870 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.201881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f"}
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.201893 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.201896 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202567 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.202908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: W1006 07:16:47.279851 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused
Oct 06 07:16:47 crc kubenswrapper[4769]: E1006 07:16:47.280867 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.339318 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.343807 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.343858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.343872 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.343906 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 07:16:47 crc kubenswrapper[4769]: E1006 07:16:47.344560 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.5:6443: connect: connection refused" node="crc"
Oct 06 07:16:47 crc kubenswrapper[4769]: W1006 07:16:47.359227 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.5:6443: connect: connection refused
Oct 06 07:16:47 crc kubenswrapper[4769]: E1006 07:16:47.359483 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.5:6443: connect: connection refused" logger="UnhandledError"
Oct 06 07:16:47 crc kubenswrapper[4769]: I1006 07:16:47.772572 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.208180 4769 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6fe9b94876858ba9442aa067dbed60ae53cc276c25e784cea2eced361a1564c7" exitCode=0
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.209163 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.210077 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.210940 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6fe9b94876858ba9442aa067dbed60ae53cc276c25e784cea2eced361a1564c7"}
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211481 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211316 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.212414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.212441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211404 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.211263 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.213013 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.213618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.213643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.213653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.214245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.214286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.214294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.215393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.215414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:48 crc kubenswrapper[4769]: I1006 07:16:48.215441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215170 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7b2fd898f28d3934973ec4952121cf0866aea13a7d934b3cea54f6842690004d"}
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215223 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f4360907c223b5437c612afc027fbbfb571e9cf90fcdee4a6f6bb4202f5c104f"}
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"79afe458b8a08fdcbf4337e8ab1c39b161f33366d361973da24eb4abbf789e17"}
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215249 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215254 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"45b7e66d4379d569799938fb27c5d46685cae705882b2f3bf2e0583cf37e9b94"}
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215185 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.215383 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216572 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.216611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.352563 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.640297 4769 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 07:16:49 crc kubenswrapper[4769]: I1006 07:16:49.640474 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.001339 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.225607 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"18ad02095589e4a61f5c8aa60e464ce771028e2da92fff8e9cc7414d5f42cab4"}
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.225689 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.225697 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227213 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.227782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.545270 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.546406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.546468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.546481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:50 crc kubenswrapper[4769]: I1006 07:16:50.546510 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.228465 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.228993 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230111 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:51 crc kubenswrapper[4769]: I1006 07:16:51.230122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:52 crc kubenswrapper[4769]: I1006 07:16:52.501123 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Oct 06 07:16:52 crc kubenswrapper[4769]: I1006 07:16:52.501373 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:52 crc kubenswrapper[4769]: I1006 07:16:52.502623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:52 crc kubenswrapper[4769]: I1006 07:16:52.502654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:52 crc kubenswrapper[4769]: I1006 07:16:52.502662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.233492 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.233671 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.234779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.234832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.234845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:54 crc kubenswrapper[4769]: E1006 07:16:54.236321 4769 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.239240 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 07:16:54 crc kubenswrapper[4769]: I1006 07:16:54.861264 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 07:16:55 crc kubenswrapper[4769]: I1006 07:16:55.236893 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:55 crc kubenswrapper[4769]: I1006 07:16:55.239159 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:55 crc kubenswrapper[4769]: I1006 07:16:55.239282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:55 crc kubenswrapper[4769]: I1006 07:16:55.239348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:55 crc kubenswrapper[4769]: I1006 07:16:55.242311 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Oct 06 07:16:56 crc kubenswrapper[4769]: I1006 07:16:56.238972 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:56 crc kubenswrapper[4769]: I1006 07:16:56.240593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:56 crc kubenswrapper[4769]: I1006 07:16:56.240641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:56 crc kubenswrapper[4769]: I1006 07:16:56.240660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:57 crc kubenswrapper[4769]: I1006 07:16:57.241776 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:57 crc kubenswrapper[4769]: I1006 07:16:57.242733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:57 crc kubenswrapper[4769]: I1006 07:16:57.242763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:57 crc kubenswrapper[4769]: I1006 07:16:57.242776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:58 crc kubenswrapper[4769]: I1006 07:16:58.106203 4769 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Oct 06 07:16:58 crc kubenswrapper[4769]: W1006 07:16:58.129932 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 06 07:16:58 crc kubenswrapper[4769]: I1006 07:16:58.130057 4769 trace.go:236] Trace[1889779622]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 07:16:48.128) (total time: 10001ms):
Oct 06 07:16:58 crc kubenswrapper[4769]: Trace[1889779622]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:16:58.129)
Oct 06 07:16:58 crc kubenswrapper[4769]: Trace[1889779622]: [10.001560716s] [10.001560716s] END
Oct 06 07:16:58 crc kubenswrapper[4769]: E1006 07:16:58.130090 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 06 07:16:58 crc kubenswrapper[4769]: W1006 07:16:58.499777 4769 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Oct 06 07:16:58 crc kubenswrapper[4769]: I1006 07:16:58.499915 4769 trace.go:236] Trace[2103280170]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 07:16:48.498) (total time: 10001ms):
Oct 06 07:16:58 crc kubenswrapper[4769]: Trace[2103280170]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (07:16:58.499)
Oct 06 07:16:58 crc kubenswrapper[4769]: Trace[2103280170]: [10.001086904s] [10.001086904s] END
Oct 06 07:16:58 crc kubenswrapper[4769]: E1006 07:16:58.499976 4769 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Oct 06 07:16:58 crc kubenswrapper[4769]: E1006 07:16:58.772864 4769 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.186bd5a9aa786ebb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-06 07:16:44.104519355 +0000 UTC m=+0.628800502,LastTimestamp:2025-10-06 07:16:44.104519355 +0000 UTC m=+0.628800502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.038528 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.038890 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.040723 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.040781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.040800 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.130527 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.247883 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.249296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.249375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.249393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.262934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.353299 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.353377 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.473020 4769 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.473135 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.640530 4769 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Oct 06 07:16:59 crc kubenswrapper[4769]: I1006 07:16:59.640660 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Oct 06 07:17:00 crc kubenswrapper[4769]: I1006 07:17:00.250931 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Oct 06 07:17:00 crc kubenswrapper[4769]: I1006 07:17:00.252491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:17:00 crc kubenswrapper[4769]: I1006 07:17:00.252573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:17:00 crc kubenswrapper[4769]: I1006 07:17:00.252588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:17:01 crc kubenswrapper[4769]: I1006 07:17:01.956852 4769 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Oct 06 07:17:02 crc kubenswrapper[4769]: I1006 07:17:02.531616 4769 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.101478 4769 apiserver.go:52] "Watching apiserver"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.108813 4769 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.109130 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.109577 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 07:17:03 crc kubenswrapper[4769]: E1006 07:17:03.109660 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.109718 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.109787 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.110250 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.110309 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.110314 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 07:17:03 crc kubenswrapper[4769]: E1006 07:17:03.110360 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 07:17:03 crc kubenswrapper[4769]: E1006 07:17:03.110409 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.112878 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.113104 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.113116 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.113302 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.113372 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.114060 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.114186 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.115162 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.115490 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.143605 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.169650 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.182196 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.199006 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.210226 4769 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.212616 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.225600 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:03 crc kubenswrapper[4769]: I1006 07:17:03.237536 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.178918 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.188792 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.198555 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.208953 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.219586 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.230241 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.356339 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.360796 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.367225 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.375198 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.376938 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.386652 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.397170 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.407281 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.416694 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.425918 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.436910 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.447397 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.454695 4769 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.457103 4769 trace.go:236] Trace[759222132]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 07:16:51.094) (total time: 13362ms): Oct 06 07:17:04 crc kubenswrapper[4769]: Trace[759222132]: ---"Objects listed" error: 13362ms (07:17:04.457) Oct 06 07:17:04 crc kubenswrapper[4769]: Trace[759222132]: [13.362129769s] [13.362129769s] END Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.457127 4769 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.457136 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.457569 4769 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.458095 4769 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.458371 4769 trace.go:236] Trace[1232882274]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Oct-2025 07:16:53.714) (total time: 10743ms): Oct 06 07:17:04 crc kubenswrapper[4769]: 
Trace[1232882274]: ---"Objects listed" error: 10743ms (07:17:04.458) Oct 06 07:17:04 crc kubenswrapper[4769]: Trace[1232882274]: [10.743625828s] [10.743625828s] END Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.458401 4769 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.466772 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.475837 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.488961 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.490556 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.503201 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.514844 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.525498 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.540054 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.551069 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558326 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558377 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558398 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558435 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558459 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558484 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558527 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558549 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558574 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558595 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558617 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558637 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558655 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558675 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558693 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558714 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558742 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558766 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558790 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558811 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558798 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558834 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558856 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558878 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558899 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558921 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558942 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558985 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559032 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559080 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559136 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559160 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559182 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559209 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559247 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559271 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559293 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559314 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559335 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559382 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559405 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559444 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559468 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559489 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559530 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559575 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559596 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559623 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560242 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560278 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560307 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560331 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561354 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561467 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561489 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561513 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561539 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561586 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561610 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561634 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561655 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561681 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562608 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562647 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562677 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562709 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562734 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562831 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562861 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562908 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562962 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562989 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563017 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563043 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563070 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: 
\"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563114 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563193 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563217 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563241 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563265 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563325 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563353 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563376 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563402 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563444 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563472 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563498 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563525 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563644 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563673 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563699 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563727 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563751 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563778 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc 
kubenswrapper[4769]: I1006 07:17:04.563804 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563830 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563855 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563880 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563906 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563933 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563957 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563980 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563998 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564014 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564038 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 07:17:04 crc 
kubenswrapper[4769]: I1006 07:17:04.564070 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564121 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564188 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564236 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564259 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564283 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564312 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564336 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564381 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564404 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564445 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564493 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564511 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564533 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564562 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564587 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564604 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564627 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564651 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564676 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564700 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564727 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564750 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564774 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564797 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564820 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564866 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564889 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564912 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564939 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564965 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564988 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565012 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565036 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565060 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565088 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565114 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565138 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565160 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565183 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565207 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565229 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565253 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565276 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 
06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565300 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565323 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565349 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565373 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565438 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565466 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565490 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565519 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565545 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565663 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 06 07:17:04 
crc kubenswrapper[4769]: I1006 07:17:04.565697 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565723 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565746 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565794 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565818 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565843 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565869 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565894 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565922 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565947 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 06 
07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565970 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565994 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566020 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558890 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566044 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566071 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566125 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566212 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566264 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566294 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566377 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566448 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566476 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562099 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566502 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567311 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567329 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.558923 4769 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559069 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559207 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559242 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559315 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.559367 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560045 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560199 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560269 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560277 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560537 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560554 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560734 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560746 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560863 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560908 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.560952 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561108 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561133 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561178 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561252 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561472 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561530 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561566 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561706 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561817 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561904 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.561940 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562030 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562111 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562133 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.562161 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563433 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563489 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563736 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563773 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563827 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563920 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.563955 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564058 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564150 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564209 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564285 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564909 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.564976 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565115 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565169 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565399 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565791 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.565813 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566030 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566075 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566467 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566496 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577959 4769 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577978 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566682 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566694 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.578725 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566841 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.566927 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567059 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567160 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567208 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567395 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567385 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567653 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.567675 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.568668 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.568964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.568982 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.569076 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.569011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.569158 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.569898 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:17:05.069870203 +0000 UTC m=+21.594151410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.570779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.571137 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.571156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.572549 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.572575 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.572840 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.572943 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573206 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573240 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573408 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573490 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573569 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.578865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.573784 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.574027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.574128 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.574580 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.574658 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.574812 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575030 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575479 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575489 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575595 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575618 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575643 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575704 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575752 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575804 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575974 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.575991 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576182 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576267 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576322 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576444 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576444 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576783 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576777 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.576837 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577134 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577186 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.579007 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.577396 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.577616 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.578943 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.579176 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:05.079152574 +0000 UTC m=+21.603433721 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.579319 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:05.079304568 +0000 UTC m=+21.603585715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.579381 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.579484 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.581455 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.581758 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.581945 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.582196 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.582382 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.582624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.582800 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.582964 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.584069 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.584382 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.584476 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.584557 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.584575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.585193 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.586804 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.587373 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.589924 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.590038 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.591836 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.591961 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.592012 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.592087 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.592271 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.592959 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.593053 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.593062 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.593072 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.593110 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.593112 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.593193 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:05.093165858 +0000 UTC m=+21.617447005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.594284 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.594346 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.594719 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.595303 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.595579 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.595843 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.596043 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.596327 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597060 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597153 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597301 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597358 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597477 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.597557 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.600050 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.600950 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.601683 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.601798 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.602863 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.602888 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.602907 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.601699 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: E1006 07:17:04.603175 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:05.102951534 +0000 UTC m=+21.627232871 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.603230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.603266 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.603505 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.603588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.603668 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.605151 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.605225 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.605289 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.606330 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.607604 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.606091 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.608636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.611401 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.612581 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.612817 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.616985 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.617718 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.617946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.618178 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.618018 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.620396 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.621783 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.621956 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.623557 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.625404 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.628990 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.635265 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.637315 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.642018 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: W1006 07:17:04.647397 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3cdb94e98a30be78fd84c7771f7f9c474f179d24b6006c581ae210165faf13ac WatchSource:0}: Error finding container 3cdb94e98a30be78fd84c7771f7f9c474f179d24b6006c581ae210165faf13ac: Status 404 returned error can't find the container with id 3cdb94e98a30be78fd84c7771f7f9c474f179d24b6006c581ae210165faf13ac Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.652780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.653833 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668472 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668538 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668658 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 
07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668751 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668692 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668776 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668788 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668800 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668812 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668841 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath 
\"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668852 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668863 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668873 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668883 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668893 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668919 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668927 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668937 4769 reconciler_common.go:293] "Volume detached 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668946 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668955 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668964 4769 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.668973 4769 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669000 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669013 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669025 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669037 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669047 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669058 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669083 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669092 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669102 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669112 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" 
DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669121 4769 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669131 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669167 4769 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669177 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669187 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669196 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669205 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 
07:17:04.669215 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669240 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669250 4769 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669259 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669271 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669281 4769 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669291 4769 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669300 4769 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669326 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669335 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669345 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669355 4769 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669364 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669375 4769 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669398 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669408 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669435 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669447 4769 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669457 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669466 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669477 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669488 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 
07:17:04.669496 4769 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669522 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669531 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669542 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669551 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669560 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669569 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669594 4769 reconciler_common.go:293] 
"Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669603 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669612 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669620 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669629 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669639 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669649 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669675 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669684 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669692 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669702 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669711 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669754 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669765 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669777 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" 
DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669787 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669798 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669809 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669836 4769 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669847 4769 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669856 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669865 4769 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669875 4769 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669885 4769 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669911 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669920 4769 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669928 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669939 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669948 4769 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669957 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669967 4769 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.669992 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670002 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670013 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670023 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670031 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670040 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" 
DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670049 4769 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670076 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670085 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670094 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670104 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670114 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670123 4769 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670147 4769 reconciler_common.go:293] "Volume 
detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670157 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670167 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670178 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670189 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670198 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670228 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670237 4769 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670246 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670255 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670264 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670273 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670283 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670310 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670320 4769 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670329 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670338 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670348 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670359 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670381 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670391 4769 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670400 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670409 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670446 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670456 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670465 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670475 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670483 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670493 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 
07:17:04.670503 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670529 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670539 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670548 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670558 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.670581 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673448 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673496 4769 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673513 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673531 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673547 4769 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673570 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673584 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673597 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673612 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673627 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673641 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673655 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673668 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673682 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673695 4769 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673709 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc 
kubenswrapper[4769]: I1006 07:17:04.673722 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673736 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673750 4769 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673783 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673797 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673811 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673827 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673840 4769 reconciler_common.go:293] "Volume detached 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673853 4769 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673865 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673878 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673891 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673906 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673919 4769 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673936 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673948 4769 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673960 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673979 4769 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.673991 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674003 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674015 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674028 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674041 4769 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674055 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674067 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674087 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674099 4769 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674111 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674123 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath 
\"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674135 4769 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674147 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.674159 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.937974 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 06 07:17:04 crc kubenswrapper[4769]: I1006 07:17:04.947394 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 06 07:17:04 crc kubenswrapper[4769]: W1006 07:17:04.947846 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-f2d41c90279fc2a691aae918803593919de6cc556290522e35443b9f91c0de7c WatchSource:0}: Error finding container f2d41c90279fc2a691aae918803593919de6cc556290522e35443b9f91c0de7c: Status 404 returned error can't find the container with id f2d41c90279fc2a691aae918803593919de6cc556290522e35443b9f91c0de7c Oct 06 07:17:04 crc kubenswrapper[4769]: W1006 07:17:04.979324 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-417edd222efd4498ea495ee04c4e13075d6501b0bb856205f2035c467937adb0 WatchSource:0}: Error finding container 417edd222efd4498ea495ee04c4e13075d6501b0bb856205f2035c467937adb0: Status 404 returned error can't find the container with id 417edd222efd4498ea495ee04c4e13075d6501b0bb856205f2035c467937adb0 Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.078621 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.078838 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:17:06.07879439 +0000 UTC m=+22.603075537 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.165316 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.165370 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.165473 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.165324 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.165545 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.165647 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.179546 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.179589 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.179611 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.179633 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179811 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179840 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179869 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179876 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179912 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179923 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179890 4769 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179991 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.179898 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:06.179880415 +0000 UTC m=+22.704161562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.180068 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:06.18004458 +0000 UTC m=+22.704325767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.180096 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:06.180087141 +0000 UTC m=+22.704368378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:05 crc kubenswrapper[4769]: E1006 07:17:05.180131 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:06.180102492 +0000 UTC m=+22.704383779 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.264566 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.264614 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"417edd222efd4498ea495ee04c4e13075d6501b0bb856205f2035c467937adb0"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.265957 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f2d41c90279fc2a691aae918803593919de6cc556290522e35443b9f91c0de7c"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.271707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.271787 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.271803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3cdb94e98a30be78fd84c7771f7f9c474f179d24b6006c581ae210165faf13ac"} Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.282257 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.293248 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.305785 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.317083 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.328767 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.339329 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.349928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.360463 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.371202 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.380280 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.392549 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.405067 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6
355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.415400 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:05 crc kubenswrapper[4769]: I1006 07:17:05.426157 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.087096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.087326 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:17:08.08728778 +0000 UTC m=+24.611568927 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.149534 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-s8l5j"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.149867 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rlfqr"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.150139 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.150308 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bknc"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.150512 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.151544 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cjjvp"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.151701 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bq98f"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.152130 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.152283 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.152598 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.152972 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.158996 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.159526 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.159686 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.159784 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.159880 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.159902 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160117 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160124 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160183 4769 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160272 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160354 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160380 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160355 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160498 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160585 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160537 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160749 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.160957 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.161040 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.161094 4769 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.162221 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.175585 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.176695 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.178152 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.178890 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.179543 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.181249 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.182112 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.182940 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.184135 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.184955 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.186312 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.186827 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.187683 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.187719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.187745 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.187764 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.187884 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.187901 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.187928 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.187940 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.187962 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:08.187926913 +0000 UTC m=+24.712208050 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188013 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:08.187990835 +0000 UTC m=+24.712271972 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188042 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188097 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:08.188088097 +0000 UTC m=+24.712369244 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188176 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188194 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188204 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:06 crc kubenswrapper[4769]: E1006 07:17:06.188235 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:08.188229051 +0000 UTC m=+24.712510198 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.188468 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.188947 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.189716 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.190869 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.191372 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.192359 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.192810 4769 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.193445 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.193519 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.194510 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.194997 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.195991 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.196501 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.197538 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.197959 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.198603 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.199665 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.200440 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.201206 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.201830 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.202416 4769 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.202606 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.203357 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.205653 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.206206 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.207766 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.209670 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.210486 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.211554 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.212096 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.212394 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.213727 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.214343 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.215709 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.216520 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.217869 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.218491 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.219543 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.220073 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.221106 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.221298 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.221777 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.222672 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.223202 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.224167 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.224768 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.225273 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.231706 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.241898 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.250902 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.260323 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.269963 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.278906 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288647 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides\") pod 
\"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288670 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-kubelet\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288695 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288717 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288739 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-k8s-cni-cncf-io\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288764 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-daemon-config\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288786 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-etc-kubernetes\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288845 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288868 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288893 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-socket-dir-parent\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288916 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-os-release\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288939 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288961 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sskw8\" (UniqueName: \"kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.288985 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289008 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff761ae3-3c80-40f1-9aff-ea1585a9199f-rootfs\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289030 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-hostroot\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289052 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-system-cni-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289075 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289097 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289118 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-system-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289139 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cni-binary-copy\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289183 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-multus-certs\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289202 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-netns\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289223 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8lpx\" (UniqueName: \"kubernetes.io/projected/3b98abd5-990e-494c-a2a5-526fae1bd5ec-kube-api-access-p8lpx\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289246 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289269 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-os-release\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289307 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff761ae3-3c80-40f1-9aff-ea1585a9199f-mcd-auth-proxy-config\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289330 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 
07:17:06.289352 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289412 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/32047d07-7551-41a0-8669-c5ee1674290c-hosts-file\") pod \"node-resolver-s8l5j\" (UID: \"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289469 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cnibin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289503 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289524 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289544 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289565 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-multus\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289588 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289608 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-bin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289630 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff761ae3-3c80-40f1-9aff-ea1585a9199f-proxy-tls\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289652 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qtl\" (UniqueName: \"kubernetes.io/projected/ff761ae3-3c80-40f1-9aff-ea1585a9199f-kube-api-access-97qtl\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289671 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289695 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66f8k\" (UniqueName: \"kubernetes.io/projected/32047d07-7551-41a0-8669-c5ee1674290c-kube-api-access-66f8k\") pod \"node-resolver-s8l5j\" (UID: \"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289719 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-conf-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 
07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289746 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-cnibin\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289769 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c8gm\" (UniqueName: \"kubernetes.io/projected/d25975c2-003c-4557-902c-2ccbc18d0881-kube-api-access-5c8gm\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.289824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-binary-copy\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.290938 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.300690 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.309330 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.319026 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.328554 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.338714 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16
:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.348177 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.357172 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.367005 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.383933 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390257 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390281 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-socket-dir-parent\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-k8s-cni-cncf-io\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390348 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-daemon-config\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-etc-kubernetes\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-os-release\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390400 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-socket-dir-parent\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390411 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390410 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390452 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sskw8\" (UniqueName: \"kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-k8s-cni-cncf-io\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff761ae3-3c80-40f1-9aff-ea1585a9199f-rootfs\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390522 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390522 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-hostroot\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390555 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-hostroot\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390566 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-system-cni-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: 
\"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390403 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390625 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-system-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390630 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-etc-kubernetes\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390640 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cni-binary-copy\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 
07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390689 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-os-release\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390735 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ff761ae3-3c80-40f1-9aff-ea1585a9199f-rootfs\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390736 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390778 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390802 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-multus-certs\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: 
I1006 07:17:06.390825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-netns\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390865 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8lpx\" (UniqueName: \"kubernetes.io/projected/3b98abd5-990e-494c-a2a5-526fae1bd5ec-kube-api-access-p8lpx\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390887 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-os-release\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff761ae3-3c80-40f1-9aff-ea1585a9199f-mcd-auth-proxy-config\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390939 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390959 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390979 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391000 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/32047d07-7551-41a0-8669-c5ee1674290c-hosts-file\") pod \"node-resolver-s8l5j\" (UID: \"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cnibin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391061 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet\") pod 
\"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391080 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391101 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391127 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-multus\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390336 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391164 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-daemon-config\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " 
pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff761ae3-3c80-40f1-9aff-ea1585a9199f-proxy-tls\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391228 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97qtl\" (UniqueName: \"kubernetes.io/projected/ff761ae3-3c80-40f1-9aff-ea1585a9199f-kube-api-access-97qtl\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391269 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cni-binary-copy\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391272 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc 
kubenswrapper[4769]: I1006 07:17:06.391302 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391412 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-bin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391455 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-system-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391220 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-system-cni-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.390867 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-multus-certs\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391509 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-conf-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-run-netns\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391561 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-cnibin\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391746 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-multus\") pod 
\"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391756 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391771 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c8gm\" (UniqueName: \"kubernetes.io/projected/d25975c2-003c-4557-902c-2ccbc18d0881-kube-api-access-5c8gm\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391827 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-os-release\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391834 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66f8k\" (UniqueName: \"kubernetes.io/projected/32047d07-7551-41a0-8669-c5ee1674290c-kube-api-access-66f8k\") pod \"node-resolver-s8l5j\" (UID: 
\"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391901 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391903 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391946 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391802 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-conf-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391969 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/32047d07-7551-41a0-8669-c5ee1674290c-hosts-file\") pod \"node-resolver-s8l5j\" (UID: \"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391997 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-cnibin\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392006 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.391835 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-cnibin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392029 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-binary-copy\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392036 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392089 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-cni-bin\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392097 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392324 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-kubelet\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392559 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin\") pod \"ovnkube-node-8bknc\" (UID: 
\"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392568 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392688 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff761ae3-3c80-40f1-9aff-ea1585a9199f-mcd-auth-proxy-config\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392722 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d25975c2-003c-4557-902c-2ccbc18d0881-cni-binary-copy\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.392863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-host-var-lib-kubelet\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.393026 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3b98abd5-990e-494c-a2a5-526fae1bd5ec-multus-cni-dir\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.393211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.393482 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.393543 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides\") pod \"ovnkube-node-8bknc\" (UID: 
\"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.394507 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d25975c2-003c-4557-902c-2ccbc18d0881-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.398947 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff761ae3-3c80-40f1-9aff-ea1585a9199f-proxy-tls\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.406950 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.413637 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97qtl\" (UniqueName: \"kubernetes.io/projected/ff761ae3-3c80-40f1-9aff-ea1585a9199f-kube-api-access-97qtl\") pod \"machine-config-daemon-rlfqr\" (UID: \"ff761ae3-3c80-40f1-9aff-ea1585a9199f\") " pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.414021 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66f8k\" (UniqueName: \"kubernetes.io/projected/32047d07-7551-41a0-8669-c5ee1674290c-kube-api-access-66f8k\") pod \"node-resolver-s8l5j\" (UID: 
\"32047d07-7551-41a0-8669-c5ee1674290c\") " pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.414267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sskw8\" (UniqueName: \"kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8\") pod \"ovnkube-node-8bknc\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.417007 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8lpx\" (UniqueName: \"kubernetes.io/projected/3b98abd5-990e-494c-a2a5-526fae1bd5ec-kube-api-access-p8lpx\") pod \"multus-cjjvp\" (UID: \"3b98abd5-990e-494c-a2a5-526fae1bd5ec\") " pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.420913 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c8gm\" (UniqueName: \"kubernetes.io/projected/d25975c2-003c-4557-902c-2ccbc18d0881-kube-api-access-5c8gm\") pod \"multus-additional-cni-plugins-bq98f\" (UID: \"d25975c2-003c-4557-902c-2ccbc18d0881\") " pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.461566 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.473866 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-s8l5j" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.478797 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.484782 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bq98f" Oct 06 07:17:06 crc kubenswrapper[4769]: W1006 07:17:06.485453 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32047d07_7551_41a0_8669_c5ee1674290c.slice/crio-3972c4dbba1aefcfb91a899333a14124f89fd779c5dd85583e8756366e0aa5ed WatchSource:0}: Error finding container 3972c4dbba1aefcfb91a899333a14124f89fd779c5dd85583e8756366e0aa5ed: Status 404 returned error can't find the container with id 3972c4dbba1aefcfb91a899333a14124f89fd779c5dd85583e8756366e0aa5ed Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.490467 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cjjvp" Oct 06 07:17:06 crc kubenswrapper[4769]: W1006 07:17:06.527184 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod084bbba5_5940_4065_a799_2e6baff2338d.slice/crio-6252a133a51df23fc250984851eb438a448f6822704a58a273d089de0e30221c WatchSource:0}: Error finding container 6252a133a51df23fc250984851eb438a448f6822704a58a273d089de0e30221c: Status 404 returned error can't find the container with id 6252a133a51df23fc250984851eb438a448f6822704a58a273d089de0e30221c Oct 06 07:17:06 crc kubenswrapper[4769]: W1006 07:17:06.528058 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b98abd5_990e_494c_a2a5_526fae1bd5ec.slice/crio-20794df000ea72489cc5fc6ab9156239a48a09ab20559761ec738780d01776d8 WatchSource:0}: Error finding container 20794df000ea72489cc5fc6ab9156239a48a09ab20559761ec738780d01776d8: Status 404 returned error can't find the container with id 20794df000ea72489cc5fc6ab9156239a48a09ab20559761ec738780d01776d8 Oct 06 07:17:06 crc kubenswrapper[4769]: W1006 07:17:06.529017 4769 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd25975c2_003c_4557_902c_2ccbc18d0881.slice/crio-c211ca73925da56c64c538014f7b8dc6cd3355c3769a16a6dcfcbe1ad452eea8 WatchSource:0}: Error finding container c211ca73925da56c64c538014f7b8dc6cd3355c3769a16a6dcfcbe1ad452eea8: Status 404 returned error can't find the container with id c211ca73925da56c64c538014f7b8dc6cd3355c3769a16a6dcfcbe1ad452eea8 Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.643838 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.653817 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.661527 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.668176 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.680938 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.693610 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.706355 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.723714 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.737476 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.750063 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.767861 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.783562 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.797593 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.812556 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.824012 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.836327 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.846802 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.864751 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.877215 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.886928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.903075 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.915665 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.926012 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.939210 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.952960 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.967533 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468
aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.980743 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:06 crc kubenswrapper[4769]: I1006 07:17:06.993394 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:06Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.165061 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.165116 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.165165 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:07 crc kubenswrapper[4769]: E1006 07:17:07.165208 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:07 crc kubenswrapper[4769]: E1006 07:17:07.165349 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:07 crc kubenswrapper[4769]: E1006 07:17:07.165467 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.278466 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerStarted","Data":"b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.278527 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerStarted","Data":"c211ca73925da56c64c538014f7b8dc6cd3355c3769a16a6dcfcbe1ad452eea8"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.279741 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerStarted","Data":"7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.279801 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerStarted","Data":"20794df000ea72489cc5fc6ab9156239a48a09ab20559761ec738780d01776d8"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.281379 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-s8l5j" event={"ID":"32047d07-7551-41a0-8669-c5ee1674290c","Type":"ContainerStarted","Data":"5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.281433 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s8l5j" event={"ID":"32047d07-7551-41a0-8669-c5ee1674290c","Type":"ContainerStarted","Data":"3972c4dbba1aefcfb91a899333a14124f89fd779c5dd85583e8756366e0aa5ed"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.283118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.283159 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.283172 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"aa73e963bf099751d40ab3a2ac38d9f953efc756ed58d4c4cb5d9149b934ef87"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.284512 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" exitCode=0 Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.284585 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" 
event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.284623 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"6252a133a51df23fc250984851eb438a448f6822704a58a273d089de0e30221c"} Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.296013 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.308932 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.325560 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.338764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.352485 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468
aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.364752 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.375977 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.389963 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controll
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.403970 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.423469 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.436903 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.449047 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.465126 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.480874 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.493569 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.505909 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.571179 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.598182 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\
\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.615464 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.628871 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.641797 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.653673 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.718740 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.730698 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 
07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.744150 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:07 crc kubenswrapper[4769]: I1006 07:17:07.759893 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:07Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.106022 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.106364 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:17:12.106311736 +0000 UTC m=+28.630592913 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.207217 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.207266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.207289 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.207314 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207385 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207409 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207480 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:12.207457884 +0000 UTC m=+28.731739031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207532 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:12.207491615 +0000 UTC m=+28.731772752 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207557 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207602 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207620 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207677 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207745 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207776 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207712 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:12.20768632 +0000 UTC m=+28.731967477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:08 crc kubenswrapper[4769]: E1006 07:17:08.207909 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:12.207869996 +0000 UTC m=+28.732151303 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.291007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.292151 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9" exitCode=0 Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.292232 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.294765 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.294821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 
07:17:08.294846 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.294864 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.307836 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.320100 4769 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.337775 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.354251 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.377200 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.390711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.409925 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.425250 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.441123 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.453456 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.468404 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.484503 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.504603 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.519659 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.532231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.545965 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.564683 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.577135 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{
\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.591460 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.607889 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.624841 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.647785 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.673330 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.689088 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 
07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.701458 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:08 crc kubenswrapper[4769]: I1006 07:17:08.717682 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:08Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.165887 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:09 crc kubenswrapper[4769]: E1006 07:17:09.166391 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.165911 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.165981 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:09 crc kubenswrapper[4769]: E1006 07:17:09.166604 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:09 crc kubenswrapper[4769]: E1006 07:17:09.166656 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.302636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.302687 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.304874 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d" exitCode=0 Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.304976 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d"} Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.319746 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.338832 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.355290 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.379710 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.399344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.418214 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.435049 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.451672 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{
\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.467087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.489606 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.523011 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.548461 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:09 crc kubenswrapper[4769]: I1006 07:17:09.559029 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:09Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.311572 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341" exitCode=0 Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.311634 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341"} Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.328678 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.349583 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.373928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.399518 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.422618 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.441911 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.460371 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.481262 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.540496 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.557914 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.586691 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.606667 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.621004 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.858897 4769 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.861439 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.861492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.861506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.861612 4769 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.869342 4769 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.869632 4769 kubelet_node_status.go:79] "Successfully registered node" node="crc" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.870815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.870840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.870849 4769 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.870872 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.870883 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.888151 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.896267 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.897131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.897169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.897203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.897225 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.915640 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.921290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.921333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.921343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.921366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.921379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.943266 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.947636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.947673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.947681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.947698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.947709 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.959975 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.963194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.963221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.963231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.963243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.963252 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.974465 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:10Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:10 crc kubenswrapper[4769]: E1006 07:17:10.974584 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.976435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.976465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.976473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.976490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:10 crc kubenswrapper[4769]: I1006 07:17:10.976501 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:10Z","lastTransitionTime":"2025-10-06T07:17:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.079458 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.079521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.079538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.079565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.079584 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.165921 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.165974 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.166058 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:11 crc kubenswrapper[4769]: E1006 07:17:11.166165 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:11 crc kubenswrapper[4769]: E1006 07:17:11.166528 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:11 crc kubenswrapper[4769]: E1006 07:17:11.166601 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.182071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.182118 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.182130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.182151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.182165 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.285683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.285770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.285791 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.285817 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.285836 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.318223 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e" exitCode=0 Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.318319 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.324702 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.344161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.362930 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.385067 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.388407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.388454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.388463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc 
kubenswrapper[4769]: I1006 07:17:11.388478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.388490 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.408559 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.427387 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.456035 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.474114 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.486765 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.491024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.491055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.491068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.491086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.491098 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.508154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.525742 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.545913 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.565255 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.582474 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.594314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.594391 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.594453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.594495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.594522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.698322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.698380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.698399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.698457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.698476 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.793357 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-b48h2"] Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.794239 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.796910 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.798175 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.798487 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.800331 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.801587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.801649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.801670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.801698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.801719 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.814359 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf83
2cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.833860 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.868816 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.888919 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 
07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.904725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.904800 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.905169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.905213 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.905237 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:11Z","lastTransitionTime":"2025-10-06T07:17:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.907614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.927182 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f84983
78b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
0-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.947934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.958508 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-host\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.958582 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-vnwcb\" (UniqueName: \"kubernetes.io/projected/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-kube-api-access-vnwcb\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.958627 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-serviceca\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.971805 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:11 crc kubenswrapper[4769]: I1006 07:17:11.987934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:11Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.004527 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.008659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.008703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.008715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.008733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.008745 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.026730 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z 
is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.047622 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.059469 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-host\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.059535 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnwcb\" (UniqueName: \"kubernetes.io/projected/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-kube-api-access-vnwcb\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.059575 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-serviceca\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.059620 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-host\") pod 
\"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.062334 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-serviceca\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.068561 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.083504 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.094669 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnwcb\" (UniqueName: \"kubernetes.io/projected/3f3d333f-88b0-49d3-a503-ee2c3a48c17b-kube-api-access-vnwcb\") pod \"node-ca-b48h2\" (UID: \"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\") " pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.112085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.112140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.112155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.112175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.112190 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.118713 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-b48h2" Oct 06 07:17:12 crc kubenswrapper[4769]: W1006 07:17:12.141850 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f3d333f_88b0_49d3_a503_ee2c3a48c17b.slice/crio-5c47c9836b88f70f4f81ff7b46d1fe05848d263536ee2273f4e73b2b70a8513f WatchSource:0}: Error finding container 5c47c9836b88f70f4f81ff7b46d1fe05848d263536ee2273f4e73b2b70a8513f: Status 404 returned error can't find the container with id 5c47c9836b88f70f4f81ff7b46d1fe05848d263536ee2273f4e73b2b70a8513f Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.160563 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.160860 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.160838384 +0000 UTC m=+36.685119531 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.216450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.216493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.216504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.216525 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.216537 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.262222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.262293 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.262361 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.262397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262512 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262565 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262519 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262604 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262630 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262648 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262664 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.26264036 +0000 UTC m=+36.786921517 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262582 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262713 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.262692171 +0000 UTC m=+36.786973358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.262731 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.262723252 +0000 UTC m=+36.787004409 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.263032 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: E1006 07:17:12.263217 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.263182125 +0000 UTC m=+36.787463302 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.319624 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.319690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.319711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.319738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.319762 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.332373 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7" exitCode=0 Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.332490 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.333504 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b48h2" event={"ID":"3f3d333f-88b0-49d3-a503-ee2c3a48c17b","Type":"ContainerStarted","Data":"5c47c9836b88f70f4f81ff7b46d1fe05848d263536ee2273f4e73b2b70a8513f"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.348345 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.369969 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.386347 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.405519 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.420941 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.422256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.422298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.422308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.422327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.422338 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.439389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.455231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.472010 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.488524 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.520760 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.526273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.526313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.526324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.526342 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.526354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.539534 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.554187 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.572136 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.586153 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.628973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.629012 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.629024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.629042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.629054 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.732388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.732449 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.732459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.732475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.732484 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.835476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.835540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.835557 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.835583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.835601 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.939259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.939319 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.939339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.939361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:12 crc kubenswrapper[4769]: I1006 07:17:12.939380 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:12Z","lastTransitionTime":"2025-10-06T07:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.042943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.042981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.042996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.043013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.043023 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.145754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.146129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.146145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.146166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.146181 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.165254 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.165360 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.165343 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:13 crc kubenswrapper[4769]: E1006 07:17:13.165518 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:13 crc kubenswrapper[4769]: E1006 07:17:13.165666 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:13 crc kubenswrapper[4769]: E1006 07:17:13.165819 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.248849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.248907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.248927 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.248954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.248973 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.347406 4769 generic.go:334] "Generic (PLEG): container finished" podID="d25975c2-003c-4557-902c-2ccbc18d0881" containerID="67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a" exitCode=0 Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.347491 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerDied","Data":"67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.353196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.353257 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.353275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.353337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.353362 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.358009 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.358398 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.358515 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.360455 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b48h2" event={"ID":"3f3d333f-88b0-49d3-a503-ee2c3a48c17b","Type":"ContainerStarted","Data":"a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.388145 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.397326 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.415157 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.423505 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.424165 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.426221 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.440786 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.454897 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.456351 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.456376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.456385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.456399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.456410 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.467988 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.485118 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07
:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.505621 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad87
12114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.519774 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.531687 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.542576 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.554551 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.561065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.561123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.561137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 
07:17:13.561157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.561175 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.576391 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa
41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\
\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.587882 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.597769 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.611009 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.627692 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666402 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666501 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666516 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.666484 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.680705 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.697201 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.710980 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.723470 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.740995 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.754815 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:0
5Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.765569 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.769992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.770041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.770052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.770069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.770085 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.783766 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.798032 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.872663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.872734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.872755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.872786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.872814 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.975270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.975326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.975338 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.975354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:13 crc kubenswrapper[4769]: I1006 07:17:13.975368 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:13Z","lastTransitionTime":"2025-10-06T07:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.078002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.078041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.078052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.078069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.078078 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.179505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.179547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.179558 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.179573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.179584 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.189178 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.210106 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.224704 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06
T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.237764 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.252216 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.264289 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.274010 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.281687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.281768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.281789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.281820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.281844 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.292400 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.304319 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.314753 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.357543 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.367561 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" event={"ID":"d25975c2-003c-4557-902c-2ccbc18d0881","Type":"ContainerStarted","Data":"856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.367659 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.384368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.384400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.384410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.384463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.384476 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.399518 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.418922 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.430902 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.486456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.486498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.486508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.486523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.486533 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.588769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.589130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.589138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.589151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.589166 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.691052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.691080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.691089 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.691103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.691113 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.794390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.794469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.794487 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.794509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.794528 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.896888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.896930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.896941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.896958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.896972 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.999074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.999104 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.999121 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.999133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:14 crc kubenswrapper[4769]: I1006 07:17:14.999143 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:14Z","lastTransitionTime":"2025-10-06T07:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.100830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.100915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.100943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.100975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.101001 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.165718 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.165773 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:15 crc kubenswrapper[4769]: E1006 07:17:15.165915 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.165941 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:15 crc kubenswrapper[4769]: E1006 07:17:15.166095 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:15 crc kubenswrapper[4769]: E1006 07:17:15.166347 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.203999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.204091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.204172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.204217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.204276 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.308954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.309013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.309049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.309070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.309089 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.369609 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.383179 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8
e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"e
xitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.395190 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.408737 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.411239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.411290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.411306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc 
kubenswrapper[4769]: I1006 07:17:15.411327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.411344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.422411 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.434648 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.452069 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.463374 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.473975 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.486882 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.497032 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.510461 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.514064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.514131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.514140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.514156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.514166 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.523638 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.534177 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.545700 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:15Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.616965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.617006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.617016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.617031 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.617042 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.720592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.720632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.720640 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.720672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.720696 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.823555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.823596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.823606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.823622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.823632 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.926127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.926186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.926198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.926219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:15 crc kubenswrapper[4769]: I1006 07:17:15.926234 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:15Z","lastTransitionTime":"2025-10-06T07:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.028949 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.028983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.028993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.029008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.029018 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.131251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.131313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.131330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.131353 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.131367 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.233898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.233965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.233985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.234012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.234031 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.336935 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.336975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.336986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.336999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.337008 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.439577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.439627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.439642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.439662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.439678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.541614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.541662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.541671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.541685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.541694 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.644582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.644628 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.644639 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.644659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.644671 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.747102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.747160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.747171 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.747188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.747200 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.849554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.849591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.849599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.849615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.849627 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.951754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.951818 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.951841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.951868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:16 crc kubenswrapper[4769]: I1006 07:17:16.951891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:16Z","lastTransitionTime":"2025-10-06T07:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.101651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.101709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.101726 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.101749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.101766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.165852 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.165869 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.165963 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:17 crc kubenswrapper[4769]: E1006 07:17:17.166025 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:17 crc kubenswrapper[4769]: E1006 07:17:17.165989 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:17 crc kubenswrapper[4769]: E1006 07:17:17.166192 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.204093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.204131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.204144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.204158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.204169 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.307708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.307742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.307757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.307778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.307794 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.410137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.410184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.410198 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.410215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.410230 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.512878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.512916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.512927 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.512941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.512950 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.615607 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.615681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.615695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.615712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.615728 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.718266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.718513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.718527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.718549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.718564 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.820775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.820830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.820843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.820860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.820874 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.923672 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.923696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.923706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.923720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:17 crc kubenswrapper[4769]: I1006 07:17:17.923730 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:17Z","lastTransitionTime":"2025-10-06T07:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.026047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.026077 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.026086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.026100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.026109 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.128861 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.128898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.128906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.128921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.128930 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.231023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.231057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.231066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.231080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.231089 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.332915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.332951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.332961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.332974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.332983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.386213 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/0.log" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.389317 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf" exitCode=1 Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.389370 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.390149 4769 scope.go:117] "RemoveContainer" containerID="18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.403192 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.414650 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.425998 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.435449 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.435487 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.435497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc 
kubenswrapper[4769]: I1006 07:17:18.435511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.435521 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.439082 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.451395 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.477486 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07
:17:17Z\\\",\\\"message\\\":\\\"wall event handler 9 for removal\\\\nI1006 07:17:17.505410 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 07:17:17.505450 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 07:17:17.505475 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 07:17:17.505489 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 07:17:17.505497 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 07:17:17.505725 6059 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 07:17:17.505894 6059 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 07:17:17.506222 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 07:17:17.506256 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 07:17:17.506282 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 07:17:17.506301 6059 factory.go:656] Stopping watch factory\\\\nI1006 07:17:17.506317 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1006 07:17:17.506369 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 07:17:17.506381 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.489412 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.499016 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.513017 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.524191 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.537916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.537967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.537985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.538010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.538024 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.539054 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z 
is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.553647 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.566442 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.583058 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.641033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.641070 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.641079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.641093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.641103 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.743698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.743731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.743740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.743759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.743768 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.845892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.845931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.845959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.845975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.845986 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.948502 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.948561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.948569 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.948585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.948596 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:18Z","lastTransitionTime":"2025-10-06T07:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.967836 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj"] Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.968327 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.969919 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.970255 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.980892 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:18 crc kubenswrapper[4769]: I1006 07:17:18.990941 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:18Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.001955 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.015674 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.027278 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{
\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.041050 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.051240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.051300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.051314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc 
kubenswrapper[4769]: I1006 07:17:19.051338 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.051356 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.055259 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.066721 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.085073 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07
:17:17Z\\\",\\\"message\\\":\\\"wall event handler 9 for removal\\\\nI1006 07:17:17.505410 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 07:17:17.505450 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 07:17:17.505475 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 07:17:17.505489 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 07:17:17.505497 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 07:17:17.505725 6059 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 07:17:17.505894 6059 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 07:17:17.506222 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 07:17:17.506256 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 07:17:17.506282 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 07:17:17.506301 6059 factory.go:656] Stopping watch factory\\\\nI1006 07:17:17.506317 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1006 07:17:17.506369 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 07:17:17.506381 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.095017 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.107116 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da4
10c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.115113 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.127756 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.135161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64whz\" (UniqueName: \"kubernetes.io/projected/e8bd0044-3318-436a-bd7f-f1e0268a30e6-kube-api-access-64whz\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.135234 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.135279 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.135328 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.137288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda
120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.148584 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:19Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.154470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.154509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.154522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.154539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.154551 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.165710 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.165751 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.165723 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:19 crc kubenswrapper[4769]: E1006 07:17:19.165826 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:19 crc kubenswrapper[4769]: E1006 07:17:19.165882 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:19 crc kubenswrapper[4769]: E1006 07:17:19.165972 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.235774 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64whz\" (UniqueName: \"kubernetes.io/projected/e8bd0044-3318-436a-bd7f-f1e0268a30e6-kube-api-access-64whz\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.235828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.235849 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.235875 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.236392 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.236681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.241463 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bd0044-3318-436a-bd7f-f1e0268a30e6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.252761 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64whz\" (UniqueName: \"kubernetes.io/projected/e8bd0044-3318-436a-bd7f-f1e0268a30e6-kube-api-access-64whz\") pod \"ovnkube-control-plane-749d76644c-882lj\" (UID: \"e8bd0044-3318-436a-bd7f-f1e0268a30e6\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.257310 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.257344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.257354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.257437 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.257451 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.280648 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" Oct 06 07:17:19 crc kubenswrapper[4769]: W1006 07:17:19.294791 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8bd0044_3318_436a_bd7f_f1e0268a30e6.slice/crio-1a1d7ac2b6be26867113985a5186874d355245d6d6be3c4e35d600755090382f WatchSource:0}: Error finding container 1a1d7ac2b6be26867113985a5186874d355245d6d6be3c4e35d600755090382f: Status 404 returned error can't find the container with id 1a1d7ac2b6be26867113985a5186874d355245d6d6be3c4e35d600755090382f Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.359074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.359115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.359127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.359144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.359155 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.392835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" event={"ID":"e8bd0044-3318-436a-bd7f-f1e0268a30e6","Type":"ContainerStarted","Data":"1a1d7ac2b6be26867113985a5186874d355245d6d6be3c4e35d600755090382f"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.462098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.462139 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.462154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.462170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.462181 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.564982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.565017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.565029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.565043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.565055 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.667688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.667731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.667742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.667779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.667789 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.770217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.770311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.770326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.770351 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.770365 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.873395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.873488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.873509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.873534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.873549 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.977345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.977414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.977492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.977534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:19 crc kubenswrapper[4769]: I1006 07:17:19.977562 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:19Z","lastTransitionTime":"2025-10-06T07:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.071453 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wxwxs"] Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.071892 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.071952 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.080848 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.080893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.080902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.080919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.080928 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.086086 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.097595 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.110550 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.121249 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.132763 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc 
kubenswrapper[4769]: I1006 07:17:20.149325 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.163269 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.181902 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.183203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.183237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.183248 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.183263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.183273 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.194974 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z 
is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.208592 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.220301 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.230025 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.241396 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.244647 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.244771 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2c48\" (UniqueName: \"kubernetes.io/projected/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-kube-api-access-p2c48\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.244847 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-10-06 07:17:36.244822304 +0000 UTC m=+52.769103451 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.244981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.253249 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.271838 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:17Z\\\",\\\"message\\\":\\\"wall event handler 9 for removal\\\\nI1006 07:17:17.505410 6059 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1006 07:17:17.505450 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 
07:17:17.505475 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 07:17:17.505489 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 07:17:17.505497 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 07:17:17.505725 6059 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 07:17:17.505894 6059 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 07:17:17.506222 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 07:17:17.506256 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 07:17:17.506282 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 07:17:17.506301 6059 factory.go:656] Stopping watch factory\\\\nI1006 07:17:17.506317 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1006 07:17:17.506369 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 07:17:17.506381 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.282024 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:20Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.285706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.285743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.285752 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.285765 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.285774 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346285 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346338 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2c48\" (UniqueName: \"kubernetes.io/projected/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-kube-api-access-p2c48\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346402 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346444 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.346468 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346532 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346582 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346603 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346532 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346651 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:36.346600289 +0000 UTC m=+52.870881436 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346694 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:36.346680282 +0000 UTC m=+52.870961529 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346702 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346713 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:20.846704312 +0000 UTC m=+37.370985589 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346714 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346653 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346730 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346739 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346779 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:36.346762794 +0000 UTC m=+52.871044041 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.346798 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:36.346790345 +0000 UTC m=+52.871071622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.362278 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2c48\" (UniqueName: \"kubernetes.io/projected/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-kube-api-access-p2c48\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.387975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.388016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 
07:17:20.388026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.388040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.388049 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.396730 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/0.log" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.399507 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.490735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.490779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.490789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.490803 4769 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.490813 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.593158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.593192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.593200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.593216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.593225 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.695532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.695584 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.695596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.695614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.695625 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.797886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.797916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.797924 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.797936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.797945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.852029 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.852175 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: E1006 07:17:20.852250 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:21.852218803 +0000 UTC m=+38.376499950 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.900346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.900379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.900389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.900404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:20 crc kubenswrapper[4769]: I1006 07:17:20.900414 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:20Z","lastTransitionTime":"2025-10-06T07:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.002529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.002568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.002578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.002590 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.002598 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.105509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.105551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.105565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.105582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.105595 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.141269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.141303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.141311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.141327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.141337 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.153748 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.157054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.157094 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.157104 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.157119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.157127 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.165716 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.165749 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.165756 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.165845 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.165962 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.166050 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.173475 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.177662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.177711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.177720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.177740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.177755 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.188228 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.191765 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.191803 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.191811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.191824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.191832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.204302 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.207242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.207272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.207281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.207294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.207304 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.218647 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.218750 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.219704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.219729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.219739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.219749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.219758 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.322490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.322543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.322555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.322580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.322594 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.412262 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/1.log" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.413076 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/0.log" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.419314 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a" exitCode=1 Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.419394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.419497 4769 scope.go:117] "RemoveContainer" containerID="18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.420344 4769 scope.go:117] "RemoveContainer" containerID="032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.420645 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.421072 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" event={"ID":"e8bd0044-3318-436a-bd7f-f1e0268a30e6","Type":"ContainerStarted","Data":"ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.424432 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.424464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.424474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.424488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.424498 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.434175 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.446223 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.459111 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.472276 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.484550 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc 
kubenswrapper[4769]: I1006 07:17:21.500015 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a
691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.512105 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.526149 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.526379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.526474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc 
kubenswrapper[4769]: I1006 07:17:21.526556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.526624 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.564803 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.577795 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.596826 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18facf1ca94a4751de3907e4a348e107e331efdac7e4f9ed59a83aeade84edaf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:17Z\\\",\\\"message\\\":\\\"wall event handler 9 for removal\\\\nI1006 07:17:17.505410 6059 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI1006 07:17:17.505450 6059 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1006 07:17:17.505475 6059 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1006 07:17:17.505489 6059 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1006 07:17:17.505497 6059 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1006 07:17:17.505725 6059 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1006 07:17:17.505894 6059 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1006 07:17:17.506222 6059 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1006 07:17:17.506256 6059 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1006 07:17:17.506282 6059 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1006 07:17:17.506301 6059 factory.go:656] Stopping watch factory\\\\nI1006 07:17:17.506317 6059 ovnkube.go:599] Stopped ovnkube\\\\nI1006 07:17:17.506369 6059 handler.go:208] Removed *v1.Node event handler 2\\\\nI1006 07:17:17.506381 6059 handler.go:208] Removed *v1.Node event handler 7\\\\nI1006 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.610199 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.624398 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da4
10c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.628826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.628861 4769 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.628871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.628888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.628900 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.634852 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.655536 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.669617 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.683922 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06
T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:21Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.732818 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.732916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.732946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.732977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.732999 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.835340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.835454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.835483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.835518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.835544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.860983 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.861256 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:21 crc kubenswrapper[4769]: E1006 07:17:21.861480 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:23.861397912 +0000 UTC m=+40.385679069 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.940258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.940853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.940864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.940882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:21 crc kubenswrapper[4769]: I1006 07:17:21.940892 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:21Z","lastTransitionTime":"2025-10-06T07:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.044074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.044115 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.044132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.044152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.044167 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.146964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.147034 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.147052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.147082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.147108 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.208015 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:22 crc kubenswrapper[4769]: E1006 07:17:22.208234 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.249865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.249908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.249918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.249931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.249940 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.352029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.352062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.352072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.352086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.352095 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.426395 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/1.log" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.430388 4769 scope.go:117] "RemoveContainer" containerID="032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a" Oct 06 07:17:22 crc kubenswrapper[4769]: E1006 07:17:22.430947 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.431033 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" event={"ID":"e8bd0044-3318-436a-bd7f-f1e0268a30e6","Type":"ContainerStarted","Data":"022a71bcf7ba3ffcdc6b49ccb90b6e2351a2994a5daf06e93e874d222b2a6b75"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.447607 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.454658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.454721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.454739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.454776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.454796 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.462715 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.479392 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.500330 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.517191 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.536464 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.549084 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.557649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.557735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.557761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.557795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.557824 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.579945 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.603553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.629065 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.640075 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.650452 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc 
kubenswrapper[4769]: I1006 07:17:22.670082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.670123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.670131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.670146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.670157 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.673364 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.685083 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.699150 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.713146 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.735773 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.747773 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.765179 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.772919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.772949 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.772957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.772971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.772980 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.777237 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.791170 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.803652 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.816602 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.827837 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.836908 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc 
kubenswrapper[4769]: I1006 07:17:22.851508 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a
691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/k
ube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.864894 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.874924 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.876235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.876276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.876291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc 
kubenswrapper[4769]: I1006 07:17:22.876317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.876335 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.888168 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.899471 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.919965 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.933140 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:22Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.979512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.979582 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.979606 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.979638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:22 crc kubenswrapper[4769]: I1006 07:17:22.979667 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:22Z","lastTransitionTime":"2025-10-06T07:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.082481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.082525 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.082536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.082550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.082559 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.165553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.165617 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.165553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:23 crc kubenswrapper[4769]: E1006 07:17:23.165801 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:23 crc kubenswrapper[4769]: E1006 07:17:23.165985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:23 crc kubenswrapper[4769]: E1006 07:17:23.166149 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.185637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.185710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.185734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.185767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.185792 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.288619 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.288684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.288695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.288708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.288717 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.392598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.392755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.392790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.392878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.392922 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.495993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.496027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.496038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.496054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.496065 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.597938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.597980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.597991 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.598009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.598022 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.701360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.701440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.701459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.701483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.701496 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.803919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.803994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.804007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.804026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.804038 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.906717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.906806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.906822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.906841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.906854 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:23Z","lastTransitionTime":"2025-10-06T07:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:23 crc kubenswrapper[4769]: I1006 07:17:23.928108 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:23 crc kubenswrapper[4769]: E1006 07:17:23.928350 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:23 crc kubenswrapper[4769]: E1006 07:17:23.928479 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:27.928449951 +0000 UTC m=+44.452731138 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.009934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.009988 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.010007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.010035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.010053 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.112455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.112520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.112537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.112562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.112579 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.165471 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:24 crc kubenswrapper[4769]: E1006 07:17:24.165607 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.176474 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.188180 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.198984 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215078 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.215456 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.225188 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a2994a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.238447 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.251110 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.263979 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.275117 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06
T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.284190 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.296333 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4da
c25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebb
be46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876
b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.307630 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.317441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.317481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.317492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.317508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.317522 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.319522 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z 
is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.328681 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc 
kubenswrapper[4769]: I1006 07:17:24.342107 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.354596 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.419641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.419676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.419684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.419696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.419706 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.521989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.522021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.522030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.522044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.522054 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.624353 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.624393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.624402 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.624416 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.624445 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.726654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.726771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.726782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.726797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.726808 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.829016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.829065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.829082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.829097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.829107 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.931725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.931760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.931770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.931790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:24 crc kubenswrapper[4769]: I1006 07:17:24.931802 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:24Z","lastTransitionTime":"2025-10-06T07:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.033660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.034143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.034215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.034240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.034594 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.137112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.137362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.137529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.137630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.137714 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.170283 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.170327 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.170289 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:25 crc kubenswrapper[4769]: E1006 07:17:25.170415 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:25 crc kubenswrapper[4769]: E1006 07:17:25.170509 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:25 crc kubenswrapper[4769]: E1006 07:17:25.170688 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.240450 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.240494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.240505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.240522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.240534 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.342240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.342541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.342627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.342715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.342912 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.445201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.445255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.445265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.445279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.445288 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.547325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.547381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.547391 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.547406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.547448 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.651064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.651134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.651148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.651173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.651187 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.754234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.754291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.754302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.754339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.754355 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.857307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.857347 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.857357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.857370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.857379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.960789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.960844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.960857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.960875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:25 crc kubenswrapper[4769]: I1006 07:17:25.960887 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:25Z","lastTransitionTime":"2025-10-06T07:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.063705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.063747 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.063757 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.063772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.063784 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.164961 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:26 crc kubenswrapper[4769]: E1006 07:17:26.165088 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.165629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.165687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.165706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.165729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.165747 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.268300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.268362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.268371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.268399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.268411 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.371473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.371505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.371513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.371527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.371536 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.473458 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.473497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.473505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.473518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.473526 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.575399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.575483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.575502 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.575520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.575529 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.677936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.678001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.678013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.678025 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.678034 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.780460 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.780499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.780508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.780524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.780535 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.883044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.883090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.883105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.883125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.883140 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.986197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.986277 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.986301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.986329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:26 crc kubenswrapper[4769]: I1006 07:17:26.986349 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:26Z","lastTransitionTime":"2025-10-06T07:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.089080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.089185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.089218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.089249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.089271 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.165193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.165193 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:27 crc kubenswrapper[4769]: E1006 07:17:27.165596 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.165278 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:27 crc kubenswrapper[4769]: E1006 07:17:27.165728 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:27 crc kubenswrapper[4769]: E1006 07:17:27.165869 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.192508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.192573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.192591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.192614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.192631 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.296265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.296324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.296340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.296362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.296378 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.399896 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.399966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.399987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.400024 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.400046 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.503013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.503051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.503059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.503073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.503082 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.606794 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.606856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.606868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.606890 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.606911 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.709608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.709645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.709657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.709673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.709687 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.812049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.812083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.812091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.812102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.812113 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.915414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.915472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.915513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.915529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.915539 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:27Z","lastTransitionTime":"2025-10-06T07:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:27 crc kubenswrapper[4769]: I1006 07:17:27.971309 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:27 crc kubenswrapper[4769]: E1006 07:17:27.971480 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:27 crc kubenswrapper[4769]: E1006 07:17:27.971531 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:35.971516108 +0000 UTC m=+52.495797255 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.018439 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.018473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.018489 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.018506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.018515 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.120974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.121033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.121055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.121083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.121104 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.165214 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:28 crc kubenswrapper[4769]: E1006 07:17:28.165394 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.224043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.224072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.224080 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.224093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.224104 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.327235 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.327283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.327300 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.327324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.327343 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.429986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.430046 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.430058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.430078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.430089 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.532831 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.532886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.532898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.532916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.532926 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.636116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.636192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.636207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.636231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.636250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.739399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.739481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.739498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.739522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.739538 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.842922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.843007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.843031 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.843066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.843088 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.946062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.946119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.946132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.946151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:28 crc kubenswrapper[4769]: I1006 07:17:28.946165 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:28Z","lastTransitionTime":"2025-10-06T07:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.048978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.049044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.049061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.049087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.049103 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.154272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.154344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.154354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.154372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.154383 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.165056 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.165063 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:29 crc kubenswrapper[4769]: E1006 07:17:29.165263 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.165087 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:29 crc kubenswrapper[4769]: E1006 07:17:29.165325 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:29 crc kubenswrapper[4769]: E1006 07:17:29.165502 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.258381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.258414 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.258444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.258461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.258472 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.361846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.361889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.361898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.361912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.361921 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.464575 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.464673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.464692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.464724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.464743 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.569538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.569561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.569568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.569581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.569589 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.672556 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.672643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.672660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.672688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.672706 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.775290 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.775391 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.775451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.775481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.775511 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.878498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.878549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.878563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.878580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.878592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.981822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.981890 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.981914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.981944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:29 crc kubenswrapper[4769]: I1006 07:17:29.981966 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:29Z","lastTransitionTime":"2025-10-06T07:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.085142 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.085201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.085209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.085224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.085236 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.165259 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:30 crc kubenswrapper[4769]: E1006 07:17:30.165415 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.188232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.188276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.188285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.188298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.188307 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.290675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.290742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.290760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.290801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.290821 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.393047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.393092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.393104 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.393117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.393126 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.495720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.495760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.495771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.495790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.495803 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.598302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.598339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.598348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.598364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.598373 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.701545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.701602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.701616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.701639 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.701655 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.805049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.805086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.805097 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.805110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.805119 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.907770 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.907812 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.907824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.907839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:30 crc kubenswrapper[4769]: I1006 07:17:30.907851 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:30Z","lastTransitionTime":"2025-10-06T07:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.010802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.010870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.010888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.010912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.010929 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.114721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.114761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.114772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.114788 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.114800 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.165355 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.165391 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.165456 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.165521 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.165606 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.165686 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.216283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.216346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.216360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.216375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.216407 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.319131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.319217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.319232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.319249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.319259 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.421370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.421453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.421466 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.421485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.421498 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.506141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.506174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.506184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.506199 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.506207 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.523057 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:31Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.526129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.526165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.526174 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.526186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.526195 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.538996 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:31Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.542762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.542792 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.542805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.542821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.542831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.553585 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:31Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.556175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.556241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.556250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.556264 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.556274 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.567114 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:31Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.570297 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.570344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.570353 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.570368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.570378 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.583655 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:31Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:31 crc kubenswrapper[4769]: E1006 07:17:31.583835 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.585389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.585443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.585455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.585468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.585478 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.687944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.687996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.688013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.688035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.688052 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.790108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.790184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.790206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.790232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.790255 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.893812 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.893889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.893912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.893943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.893967 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.996373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.996443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.996455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.996471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:31 crc kubenswrapper[4769]: I1006 07:17:31.996481 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:31Z","lastTransitionTime":"2025-10-06T07:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.098893 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.098943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.098954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.098971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.098984 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.165775 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:32 crc kubenswrapper[4769]: E1006 07:17:32.165963 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.201177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.201215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.201223 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.201237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.201250 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.303605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.303675 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.303729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.303754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.303769 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.405612 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.405674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.405692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.405717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.405734 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.508245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.508316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.508334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.508352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.508363 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.611006 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.611078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.611112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.611140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.611179 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.713459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.713506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.713523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.713543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.713558 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.815691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.815743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.815756 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.815774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.815786 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.918664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.918717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.918731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.918750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:32 crc kubenswrapper[4769]: I1006 07:17:32.918765 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:32Z","lastTransitionTime":"2025-10-06T07:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.021460 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.021511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.021523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.021542 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.021555 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.123998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.124041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.124052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.124092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.124107 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.165791 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.165843 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:33 crc kubenswrapper[4769]: E1006 07:17:33.165901 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.165852 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:33 crc kubenswrapper[4769]: E1006 07:17:33.165978 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:33 crc kubenswrapper[4769]: E1006 07:17:33.166028 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.226685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.226721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.226732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.226750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.226761 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.330030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.330079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.330091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.330109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.330121 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.433017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.433050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.433068 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.433086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.433097 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.535289 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.535361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.535373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.535392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.535404 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.639399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.639469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.639482 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.639500 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.639511 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.701997 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.703400 4769 scope.go:117] "RemoveContainer" containerID="032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.741801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.741846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.741859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.741875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.741886 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.843340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.843657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.843668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.843684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.843699 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.945298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.945336 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.945348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.945366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:33 crc kubenswrapper[4769]: I1006 07:17:33.945376 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:33Z","lastTransitionTime":"2025-10-06T07:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.047637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.047685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.047699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.047718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.047735 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.151092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.151140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.151151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.151170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.151184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.165755 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:34 crc kubenswrapper[4769]: E1006 07:17:34.165932 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.182184 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.211473 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.224836 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.247609 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.253877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.253931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.253945 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.253969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.253982 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.261765 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.273798 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12
f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"1
92.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.286305 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc 
kubenswrapper[4769]: I1006 07:17:34.299293 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.311719 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.324569 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.339465 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.355599 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.356404 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.356462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.356488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.356516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.356528 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.375542 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, 
Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.391943 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.403557 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.416757 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.458881 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.458921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.458931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 
07:17:34.458946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.458984 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.469520 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/1.log" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.472231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.472987 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.482472 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.495567 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.506448 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.522137 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.531918 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.543057 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.554628 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.562205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.562248 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.562261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 
07:17:34.562281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.562293 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.564991 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a
45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.577483 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.586471 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.599993 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.611870 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.625508 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.635943 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc 
kubenswrapper[4769]: I1006 07:17:34.649923 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.660352 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:34Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.664984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.665028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.665038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.665054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.665065 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.766932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.766975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.766982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.766995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.767005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.869249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.869284 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.869296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.869312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.869323 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.971629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.971689 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.971711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.971738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:34 crc kubenswrapper[4769]: I1006 07:17:34.971760 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:34Z","lastTransitionTime":"2025-10-06T07:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.074475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.074527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.074539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.074580 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.074592 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.165584 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:35 crc kubenswrapper[4769]: E1006 07:17:35.166215 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.165693 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:35 crc kubenswrapper[4769]: E1006 07:17:35.166286 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.165705 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:35 crc kubenswrapper[4769]: E1006 07:17:35.166485 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.177331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.177383 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.177395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.177412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.177449 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.280065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.280098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.280108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.280120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.280129 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.308657 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.321472 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.326079 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.341644 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.355436 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.369955 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.382145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.382192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.382204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.382222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.382234 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.385064 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.432542 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.456106 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.469649 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.476390 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/2.log" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.477008 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/1.log" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.479662 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" exitCode=1 Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.479716 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" 
event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.479802 4769 scope.go:117] "RemoveContainer" containerID="032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.480476 4769 scope.go:117] "RemoveContainer" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" Oct 06 07:17:35 crc kubenswrapper[4769]: E1006 07:17:35.480644 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.484727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.484758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.484769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.484787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.484798 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.485069 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.499517 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b
77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.509980 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.521359 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.533064 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.543840 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.557288 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.572376 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.586056 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.587742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.587832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.587846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.587868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.587881 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.598742 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab7
9b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.612147 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.625374 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.646233 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://032aafcf96fd38571f6d6968fe0fdcaebc4529053abe287b25605ccd4fe20b4a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:21Z\\\",\\\"message\\\":\\\"\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-operator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1006 07:17:21.128642 6227 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-wxwxs in node crc\\\\nI1006 07:17:21.128643 6227 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj\\\\nI1006 07:17:21.128443 6227 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-b48h2\\\\nF1006 07:17:21.128666 6227 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true 
skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.662035 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.682619 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.691371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.691449 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.691469 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.691492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.691508 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.694727 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.703778 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.718054 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06
T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.727836 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.741481 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4da
c25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebb
be46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876
b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.755211 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.769258 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.780554 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc 
kubenswrapper[4769]: I1006 07:17:35.791481 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.793512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.793542 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.793551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.793564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.793574 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.804675 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:35Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.896278 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.896313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.896325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.896340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.896352 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.999266 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.999321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.999335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.999361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:35 crc kubenswrapper[4769]: I1006 07:17:35.999375 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:35Z","lastTransitionTime":"2025-10-06T07:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.061943 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.062063 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.062118 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:17:52.062104255 +0000 UTC m=+68.586385402 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.101925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.102005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.102028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.102059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.102081 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.165573 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.165778 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.205903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.205975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.205997 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.206025 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.206047 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.264505 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.264666 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 07:18:08.264652187 +0000 UTC m=+84.788933334 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.309594 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.309652 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.309666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.309691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.309706 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.366137 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.366276 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.366351 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366363 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.366391 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366475 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:18:08.366449322 +0000 UTC m=+84.890730469 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366591 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366602 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366623 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366655 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366678 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366624 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366725 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:18:08.366704949 +0000 UTC m=+84.890986126 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366731 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366751 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:18:08.36673859 +0000 UTC m=+84.891019777 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.366779 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:18:08.366768951 +0000 UTC m=+84.891050108 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.412165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.412206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.412217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.412232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.412243 4769 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.487149 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/2.log" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.492473 4769 scope.go:117] "RemoveContainer" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" Oct 06 07:17:36 crc kubenswrapper[4769]: E1006 07:17:36.492753 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.509913 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.515317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.515366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.515387 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.515415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.515475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.531707 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.549053 4769 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.576376 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.593938 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.613224 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.618832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.618889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.618907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.618935 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.618957 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.641733 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.660728 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.688353 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.713187 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc 
kubenswrapper[4769]: I1006 07:17:36.722530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.722587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.722608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.722630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.722647 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.735815 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.756021 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.780747 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.801002 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\
\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.824367 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.825216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.825306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.825354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.825383 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.825406 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.846938 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.867620 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:36Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.928733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.928839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.928859 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:36 crc 
kubenswrapper[4769]: I1006 07:17:36.928883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:36 crc kubenswrapper[4769]: I1006 07:17:36.928901 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:36Z","lastTransitionTime":"2025-10-06T07:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.033018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.033088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.033106 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.033132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.033184 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.136166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.136225 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.136243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.136269 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.136288 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.165304 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.165470 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:37 crc kubenswrapper[4769]: E1006 07:17:37.165511 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.165310 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:37 crc kubenswrapper[4769]: E1006 07:17:37.165755 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:37 crc kubenswrapper[4769]: E1006 07:17:37.165844 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.240520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.240587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.240605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.240630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.240646 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.344109 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.344164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.344176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.344195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.344209 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.447746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.447843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.447860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.447889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.447908 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.550919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.550995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.551009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.551028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.551039 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.654867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.654926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.654940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.654964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.654980 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.758189 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.758261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.758274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.758293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.758306 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.861370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.861486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.861507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.861536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.861557 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.964642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.964736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.964766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.964797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:37 crc kubenswrapper[4769]: I1006 07:17:37.964820 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:37Z","lastTransitionTime":"2025-10-06T07:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.068471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.068513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.068527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.068541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.068551 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.165538 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:38 crc kubenswrapper[4769]: E1006 07:17:38.165766 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.171234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.171263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.171272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.171288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.171299 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.273926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.273966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.273974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.273996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.274006 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.377021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.377105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.377123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.377152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.377174 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.480325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.480385 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.480401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.480444 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.480461 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.584113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.584168 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.584183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.584750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.584826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.688658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.688723 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.688743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.688768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.688785 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.792606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.792674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.792693 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.792719 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.792738 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.896320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.896412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.896478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.896504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.896571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.999592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.999624 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.999634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:38 crc kubenswrapper[4769]: I1006 07:17:38.999649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:38.999661 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:38Z","lastTransitionTime":"2025-10-06T07:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.101628 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.101674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.101687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.101703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.101714 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.165517 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.165582 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:39 crc kubenswrapper[4769]: E1006 07:17:39.165660 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.165539 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:39 crc kubenswrapper[4769]: E1006 07:17:39.165808 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:39 crc kubenswrapper[4769]: E1006 07:17:39.165971 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.204226 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.204270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.204281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.204301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.204314 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.306219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.306262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.306271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.306316 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.306329 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.408681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.408722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.408734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.408752 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.408763 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.510256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.510287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.510320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.510333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.510343 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.613412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.613467 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.613478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.613492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.613503 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.716204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.716245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.716255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.716268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.716277 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.818399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.818445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.818453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.818465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.818474 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.920906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.920936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.920946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.920961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:39 crc kubenswrapper[4769]: I1006 07:17:39.920970 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:39Z","lastTransitionTime":"2025-10-06T07:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.022983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.023020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.023029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.023044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.023053 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.126547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.126598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.126612 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.126633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.126650 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.165735 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:40 crc kubenswrapper[4769]: E1006 07:17:40.165933 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.230289 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.230361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.230393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.230412 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.230442 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.332891 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.332926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.332937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.332954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.332964 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.434985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.435062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.435079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.435095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.435107 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.537731 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.537764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.537772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.537786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.537797 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.639999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.640053 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.640064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.640081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.640093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.743050 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.743116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.743131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.743152 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.743168 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.850530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.850578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.850589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.850610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.850620 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.952981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.953026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.953038 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.953055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:40 crc kubenswrapper[4769]: I1006 07:17:40.953066 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:40Z","lastTransitionTime":"2025-10-06T07:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.055291 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.055322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.055338 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.055351 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.055360 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.157937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.157986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.157999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.158022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.158035 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.165216 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.165258 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.165278 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.165352 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.165480 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.165585 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.259905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.259936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.259970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.259985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.259994 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.362298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.362352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.362369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.362393 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.362410 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.464515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.464566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.464579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.464596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.464608 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.566903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.566943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.566951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.566967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.566975 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.668821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.668860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.668871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.668887 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.668898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.771302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.771334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.771341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.771355 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.771365 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.873620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.873659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.873670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.873684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.873695 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.889186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.889227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.889243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.889258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.889268 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.900231 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:41Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.903252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.903287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.903306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.903321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.903332 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.913731 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:41Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.916963 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.916998 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.917008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.917021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.917031 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.930932 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...image list identical to previous patch attempt, elided...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:41Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.933823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.933853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.933864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.933883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.933899 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.945699 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[...image list identical to previous patch attempt, elided...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:41Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.949131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.949232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.949302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.949390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.949475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.961468 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:41Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:41 crc kubenswrapper[4769]: E1006 07:17:41.961767 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.977563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.977615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.977632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.977655 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:41 crc kubenswrapper[4769]: I1006 07:17:41.977674 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:41Z","lastTransitionTime":"2025-10-06T07:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.080833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.080911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.080936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.080971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.080994 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.166213 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:42 crc kubenswrapper[4769]: E1006 07:17:42.167101 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.183751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.183815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.183843 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.183871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.183891 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.286578 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.286645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.286661 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.286683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.286698 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.390314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.390362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.390372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.390399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.390410 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.495702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.495780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.495811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.495866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.495890 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.599030 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.599071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.599082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.599098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.599113 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.702143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.702219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.702231 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.702248 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.702288 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.805559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.805620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.805630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.805668 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.805681 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.908456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.908520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.908529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.908544 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:42 crc kubenswrapper[4769]: I1006 07:17:42.908553 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:42Z","lastTransitionTime":"2025-10-06T07:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.011840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.011917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.011940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.011974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.011998 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.115143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.115204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.115222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.115247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.115267 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.164952 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.165036 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:43 crc kubenswrapper[4769]: E1006 07:17:43.165098 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.165179 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:43 crc kubenswrapper[4769]: E1006 07:17:43.165215 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:43 crc kubenswrapper[4769]: E1006 07:17:43.165479 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.218515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.218573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.218588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.218605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.218616 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.322192 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.322273 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.322299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.322337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.322368 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.425384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.425485 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.425505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.425539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.425560 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.528941 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.529008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.529027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.529057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.529079 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.632657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.632725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.632767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.632795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.632810 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.735123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.735167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.735182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.735202 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.735211 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.837455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.837506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.837518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.837535 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.837549 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.939637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.939694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.939711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.939729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:43 crc kubenswrapper[4769]: I1006 07:17:43.939741 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:43Z","lastTransitionTime":"2025-10-06T07:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.043164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.043204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.043218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.043261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.043272 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.146061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.146329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.146337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.146349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.146359 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.165043 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:44 crc kubenswrapper[4769]: E1006 07:17:44.165313 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.180646 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb76
13c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.193854 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.205948 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.214428 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.223976 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc 
kubenswrapper[4769]: I1006 07:17:44.242529 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.247529 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.247591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.247605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.247623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.247634 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.256611 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.269075 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.282021 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.293854 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.308128 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.319920 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.334445 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351141 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.351790 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.369433 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.386286 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.398328 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:44Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.454251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.454294 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.454306 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.454320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.454330 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.557575 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.557660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.557679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.557704 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.557722 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.660183 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.660219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.660233 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.660251 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.660265 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.762494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.762538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.762549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.762562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.762572 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.864797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.864863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.864872 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.864886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.864898 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.968253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.968364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.968389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.968458 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:44 crc kubenswrapper[4769]: I1006 07:17:44.968489 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:44Z","lastTransitionTime":"2025-10-06T07:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.070833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.070876 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.070892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.070907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.070918 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.165101 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.165173 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.165245 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:45 crc kubenswrapper[4769]: E1006 07:17:45.165235 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:45 crc kubenswrapper[4769]: E1006 07:17:45.165408 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:45 crc kubenswrapper[4769]: E1006 07:17:45.165510 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.172866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.172937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.172963 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.172992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.173016 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.275609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.275653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.275664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.275682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.275698 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.378339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.378376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.378387 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.378402 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.378410 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.480797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.480866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.480883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.480908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.480926 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.583225 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.583287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.583297 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.583311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.583320 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.686116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.686168 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.686178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.686193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.686202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.788857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.788921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.788933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.788948 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.788960 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.891120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.891163 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.891172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.891186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.891195 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.993695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.993754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.993769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.993789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:45 crc kubenswrapper[4769]: I1006 07:17:45.993803 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:45Z","lastTransitionTime":"2025-10-06T07:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.095658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.095705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.095716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.095732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.095744 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.166065 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:46 crc kubenswrapper[4769]: E1006 07:17:46.166271 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.198638 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.198715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.199071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.199119 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.199146 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.302387 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.302460 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.302476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.302498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.302512 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.405814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.405875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.405889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.405917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.405934 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.509676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.509720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.509733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.509768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.509785 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.613481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.613536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.613548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.613566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.613678 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.715815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.715881 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.715900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.715925 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.715943 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.818878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.818920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.818933 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.818953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.818967 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.921923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.921973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.921984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.922000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:46 crc kubenswrapper[4769]: I1006 07:17:46.922013 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:46Z","lastTransitionTime":"2025-10-06T07:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.024989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.025047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.025064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.025085 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.025102 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.127507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.127564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.127581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.127608 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.127630 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.165925 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.165996 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.165996 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:47 crc kubenswrapper[4769]: E1006 07:17:47.166212 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:47 crc kubenswrapper[4769]: E1006 07:17:47.166350 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:47 crc kubenswrapper[4769]: E1006 07:17:47.166510 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.167618 4769 scope.go:117] "RemoveContainer" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" Oct 06 07:17:47 crc kubenswrapper[4769]: E1006 07:17:47.168002 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.230990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.231054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.231073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.231098 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.231116 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.334712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.334764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.334779 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.334802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.334817 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.438725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.438795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.438819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.438849 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.438870 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.542190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.542370 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.542397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.542477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.542519 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.645456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.645505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.645514 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.645530 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.645543 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.748486 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.748545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.748560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.748583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.748602 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.851341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.851388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.851400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.851442 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.851455 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.954314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.954392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.954416 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.954488 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:47 crc kubenswrapper[4769]: I1006 07:17:47.954511 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:47Z","lastTransitionTime":"2025-10-06T07:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.057618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.057683 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.057705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.057729 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.057746 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.160977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.161130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.161157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.161184 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.161202 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.165564 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:48 crc kubenswrapper[4769]: E1006 07:17:48.165740 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.263505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.263539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.263548 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.263563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.263572 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.365908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.365985 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.365995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.366007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.366016 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.468695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.468751 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.468769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.468787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.468798 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.571810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.571883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.571899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.571923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.571951 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.674665 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.674725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.674738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.674758 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.674774 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.777991 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.778035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.778049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.778069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.778081 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.881457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.881506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.881519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.881538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.881549 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.984195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.984229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.984237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.984249 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:48 crc kubenswrapper[4769]: I1006 07:17:48.984258 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:48Z","lastTransitionTime":"2025-10-06T07:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.086786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.086813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.086824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.086839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.086851 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.165664 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.165750 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:49 crc kubenswrapper[4769]: E1006 07:17:49.165865 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:49 crc kubenswrapper[4769]: E1006 07:17:49.165934 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.165939 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:49 crc kubenswrapper[4769]: E1006 07:17:49.166102 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.189878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.189908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.189917 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.189930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.189943 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.292392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.292447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.292457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.292473 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.292483 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.395041 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.395074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.395083 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.395101 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.395114 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.497625 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.498793 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.498955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.499102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.499232 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.601853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.601914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.601931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.601954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.601970 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.704930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.704969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.704980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.704995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.705005 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.807232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.807302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.807321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.807345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.807365 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.909620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.909649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.909657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.909671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:49 crc kubenswrapper[4769]: I1006 07:17:49.909679 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:49Z","lastTransitionTime":"2025-10-06T07:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.011708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.011975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.012052 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.012123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.012198 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.115728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.116016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.116122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.116261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.116362 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.165729 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:50 crc kubenswrapper[4769]: E1006 07:17:50.165870 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.218706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.219111 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.219312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.219549 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.219713 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.322180 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.322239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.322253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.322275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.322287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.424410 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.424505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.424518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.424533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.424544 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.527597 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.527681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.527698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.527715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.527727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.630276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.630533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.630555 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.630581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.630600 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.732658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.732697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.732707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.732721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.732731 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.834964 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.835003 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.835016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.835029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.835039 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.937361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.937471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.937498 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.937527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:50 crc kubenswrapper[4769]: I1006 07:17:50.937546 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:50Z","lastTransitionTime":"2025-10-06T07:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.041676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.041712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.041722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.041736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.041749 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.143475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.143713 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.143800 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.143913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.143985 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.164961 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.165081 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.164982 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:51 crc kubenswrapper[4769]: E1006 07:17:51.165290 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:51 crc kubenswrapper[4769]: E1006 07:17:51.165403 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:51 crc kubenswrapper[4769]: E1006 07:17:51.165568 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.246584 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.246634 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.246657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.246674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.246685 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.349214 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.349263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.349274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.349292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.349302 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.451314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.451364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.451401 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.451451 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.451462 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.553771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.553829 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.553847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.553877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.553921 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.656245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.656302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.656320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.656348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.656367 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.758908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.758992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.759019 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.759048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.759065 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.861190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.861229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.861241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.861259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.861271 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.965013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.965053 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.965064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.965079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:51 crc kubenswrapper[4769]: I1006 07:17:51.965094 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:51Z","lastTransitionTime":"2025-10-06T07:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.067992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.068035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.068048 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.068069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.068082 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.141297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.141503 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.141597 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:18:24.141574136 +0000 UTC m=+100.665855313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.165804 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.166032 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.170980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.171018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.171028 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.171044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.171054 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.215195 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.215434 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.215532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.215637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.215722 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.232265 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:52Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.237349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.237474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.237551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.237620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.237682 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.254876 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:52Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.259217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.259296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.259315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.259345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.259372 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.278762 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:52Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.284483 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.284559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.284583 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.284614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.284638 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.300390 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:52Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.304800 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.304838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.304851 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.304871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.304884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.317337 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:52Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:52 crc kubenswrapper[4769]: E1006 07:17:52.317481 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.319188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.319227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.319238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.319254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.319266 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.422193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.422230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.422242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.422297 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.422313 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.525185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.525232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.525247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.525270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.525287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.627719 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.627774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.627784 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.627801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.627811 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.730215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.730280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.730299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.730326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.730342 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.832344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.832398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.832415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.832463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.832481 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.934676 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.934965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.935084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.935212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:52 crc kubenswrapper[4769]: I1006 07:17:52.935345 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:52Z","lastTransitionTime":"2025-10-06T07:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.037931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.037996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.038013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.038036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.038053 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.140274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.140319 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.140328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.140343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.140352 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.165239 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.165363 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.165368 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:53 crc kubenswrapper[4769]: E1006 07:17:53.165508 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:53 crc kubenswrapper[4769]: E1006 07:17:53.165644 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:53 crc kubenswrapper[4769]: E1006 07:17:53.165755 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.243209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.243255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.243264 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.243282 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.243295 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.346129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.346177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.346191 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.346210 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.346224 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.460201 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.460252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.460270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.460292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.460309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.562765 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.562814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.562823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.562839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.562850 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.664862 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.664905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.664916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.664932 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.664944 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.767878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.768015 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.768100 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.768182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.768228 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.870701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.870736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.870748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.870764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.870777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.973188 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.973229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.973239 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.973254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:53 crc kubenswrapper[4769]: I1006 07:17:53.973264 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:53Z","lastTransitionTime":"2025-10-06T07:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.075771 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.075808 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.075817 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.075831 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.075839 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.165735 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:54 crc kubenswrapper[4769]: E1006 07:17:54.165901 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.178022 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.178061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.178071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.178102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.178116 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.189915 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.209279 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.235403 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.253402 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.273874 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.280698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.280756 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.280776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.280806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.280826 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.289341 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.301706 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.322318 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.337999 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc 
kubenswrapper[4769]: I1006 07:17:54.351867 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.367554 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.384279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.384337 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.384349 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.384372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.384455 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.386502 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.401825 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.416553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.436294 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.454697 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.467158 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.486107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.486134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.486142 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.486154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.486164 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.548038 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/0.log" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.548089 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" containerID="7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0" exitCode=1 Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.548119 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerDied","Data":"7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.548476 4769 scope.go:117] "RemoveContainer" containerID="7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.565748 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.579035 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.589342 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.589380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.589455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.589479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.589490 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.627735 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.639681 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.652068 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.662759 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.675572 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.689970 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.692802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.692844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.692856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.692873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.692883 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.699467 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc 
kubenswrapper[4769]: I1006 07:17:54.713149 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.730664 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.742651 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.755520 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.768592 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.783074 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.795302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.795342 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.795352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 
07:17:54.795365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.795375 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.811310 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.824553 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:54Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.897504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.897536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.897573 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.897587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:54 crc kubenswrapper[4769]: I1006 07:17:54.897596 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:54Z","lastTransitionTime":"2025-10-06T07:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.000360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.000447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.000466 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.000492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.000512 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.103032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.103076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.103087 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.103102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.103114 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.165448 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.165620 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:55 crc kubenswrapper[4769]: E1006 07:17:55.165781 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.165823 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:55 crc kubenswrapper[4769]: E1006 07:17:55.165986 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:55 crc kubenswrapper[4769]: E1006 07:17:55.166202 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.205767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.205818 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.205830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.205845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.205856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.308943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.309017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.309035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.309059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.309078 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.411621 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.411680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.411698 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.411724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.411742 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.514730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.514800 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.514825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.514854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.514875 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.554205 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/0.log" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.554284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerStarted","Data":"94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.573202 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.614957 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.616929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.616970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.616984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.617002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.617020 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.657246 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.675614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.690702 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.704174 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.718882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.718909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.718918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.718931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.718953 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.721204 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.731912 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.743134 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc 
kubenswrapper[4769]: I1006 07:17:55.756499 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.769255 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.782099 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.794308 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multu
s\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.805531 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.817700 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.821133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.821168 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.821176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.821190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.821201 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.830229 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.843283 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:17:55Z is after 2025-08-24T17:21:41Z" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.923186 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.923234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.923245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:55 crc 
kubenswrapper[4769]: I1006 07:17:55.923260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:55 crc kubenswrapper[4769]: I1006 07:17:55.923269 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:55Z","lastTransitionTime":"2025-10-06T07:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.025899 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.025947 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.025958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.025973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.025983 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.128733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.128863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.128882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.128904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.128924 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.165530 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:56 crc kubenswrapper[4769]: E1006 07:17:56.165720 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.231515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.231588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.231600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.231623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.231638 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.334788 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.334868 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.334890 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.334919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.334941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.437014 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.437158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.437245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.437330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.437361 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.539850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.539909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.539923 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.539944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.539959 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.647632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.647670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.647681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.647700 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.647711 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.751062 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.751126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.751144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.751169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.751240 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.853978 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.854049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.854069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.854524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.854588 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.957746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.957821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.957844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.957873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:56 crc kubenswrapper[4769]: I1006 07:17:56.957895 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:56Z","lastTransitionTime":"2025-10-06T07:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.060620 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.060684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.060703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.061236 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.061300 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.165205 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.165236 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.165217 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:57 crc kubenswrapper[4769]: E1006 07:17:57.165342 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:57 crc kubenswrapper[4769]: E1006 07:17:57.165440 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:57 crc kubenswrapper[4769]: E1006 07:17:57.165545 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.166151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.166215 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.166233 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.166654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.166705 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.271642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.271709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.271730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.271763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.271786 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.376321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.376390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.376403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.376452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.376467 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.480399 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.480491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.480508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.480537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.480555 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.583336 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.583365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.583373 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.583386 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.583397 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.686298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.686341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.686350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.686367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.686377 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.788706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.788750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.788760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.788776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.788786 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.891260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.891311 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.891325 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.891341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.891350 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.994803 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.994867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.994892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.994915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:57 crc kubenswrapper[4769]: I1006 07:17:57.994934 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:57Z","lastTransitionTime":"2025-10-06T07:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.097516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.097592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.097614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.097640 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.097660 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.165149 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:17:58 crc kubenswrapper[4769]: E1006 07:17:58.165482 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.200589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.200629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.200637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.200653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.200665 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.304138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.304207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.304228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.304258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.304279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.407969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.408016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.408026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.408045 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.408056 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.510937 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.511001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.511018 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.511042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.511061 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.613648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.613701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.613719 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.613738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.613754 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.717242 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.717322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.717345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.717378 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.717402 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.820312 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.820365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.820380 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.820400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.820415 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.924061 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.924140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.924157 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.924190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:58 crc kubenswrapper[4769]: I1006 07:17:58.924210 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:58Z","lastTransitionTime":"2025-10-06T07:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.029622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.029722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.029750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.029789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.029826 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.132352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.132383 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.132390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.132403 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.132412 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.165404 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:17:59 crc kubenswrapper[4769]: E1006 07:17:59.165628 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.167077 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:17:59 crc kubenswrapper[4769]: E1006 07:17:59.167297 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.167666 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:17:59 crc kubenswrapper[4769]: E1006 07:17:59.167985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.180236 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.234858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.234888 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.234900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.234914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.234923 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.336685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.336714 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.336722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.336736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.336744 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.439358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.439406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.439455 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.439478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.439494 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.542063 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.542126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.542148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.542178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.542205 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.645505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.645543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.645551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.645564 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.645574 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.747936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.748010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.748037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.748067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.748093 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.851825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.851880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.851921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.851947 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.851966 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.954059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.954130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.954143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.954161 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:17:59 crc kubenswrapper[4769]: I1006 07:17:59.954173 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:17:59Z","lastTransitionTime":"2025-10-06T07:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.057799 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.057864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.057882 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.057908 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.057930 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.161298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.161397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.161465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.161507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.161530 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.165685 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:00 crc kubenswrapper[4769]: E1006 07:18:00.166304 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.167133 4769 scope.go:117] "RemoveContainer" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.264735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.264813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.264835 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.264866 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.264886 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.368020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.368064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.368073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.368092 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.368101 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.470630 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.470679 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.470777 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.470804 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.470823 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573081 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573108 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573122 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.573186 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/2.log" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.578309 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.579803 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.596274 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.613439 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.629843 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.648115 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multu
s\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.670152 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc 
kubenswrapper[4769]: I1006 07:18:00.675662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.675702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.675712 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.675730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.675741 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.689875 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.709229 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.727173 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.740770 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.751446 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.769930 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.777942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.777966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc 
kubenswrapper[4769]: I1006 07:18:00.777974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.777987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.777997 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.790765 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.809438 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.826240 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.836542 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.849943 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.861795 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.874574 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06
T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:00Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.880362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.880433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.880445 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.880462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.880475 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.982706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.982744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.982753 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.982766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:00 crc kubenswrapper[4769]: I1006 07:18:00.982777 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:00Z","lastTransitionTime":"2025-10-06T07:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.085259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.085298 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.085306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.085320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.085332 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.165115 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.165147 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.165115 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:01 crc kubenswrapper[4769]: E1006 07:18:01.165234 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:01 crc kubenswrapper[4769]: E1006 07:18:01.165357 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:01 crc kubenswrapper[4769]: E1006 07:18:01.165497 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.188217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.188265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.188277 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.188289 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.188300 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.291375 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.291433 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.291443 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.291457 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.291466 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.393920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.393980 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.393992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.394008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.394020 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.496509 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.496547 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.496607 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.496625 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.496636 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.582770 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/3.log" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.583369 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/2.log" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.585562 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" exitCode=1 Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.585614 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.585672 4769 scope.go:117] "RemoveContainer" containerID="6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.586301 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:18:01 crc kubenswrapper[4769]: E1006 07:18:01.586493 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.598883 4769 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.598918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.598926 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.598942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.598955 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.606096 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.622964 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.640239 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.652167 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.663353 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc 
kubenswrapper[4769]: I1006 07:18:01.676609 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.690154 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.701084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.701129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.701141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.701158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.701171 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.705623 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.724928 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a6
19dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.740830 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.764209 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.780612 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.792925 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.803448 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.803505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.803518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc 
kubenswrapper[4769]: I1006 07:18:01.803537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.803553 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.814339 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.828698 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.843445 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.862922 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3717453ed5162cc00aceacf21dd900550ce17a99ed77dee5eb42d43fd48830\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:34Z\\\",\\\"message\\\":\\\"e-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1006 07:17:34.570922 6431 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s8l5j\\\\nF1006 07:17:34.573019 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: cu\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:18:01Z\\\",\\\"message\\\":\\\" openshift-apiserver/api for network=default are: map[]\\\\nI1006 07:18:01.081408 6803 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1006 07:18:01.081442 6803 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1006 07:18:01.081455 6803 lb_config.go:1031] Cluster endpoints for 
openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nF1006 07:18:01.081460 6803 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z]\\\\nI1006 07:18:01.081469 6803 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\
\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc 
kubenswrapper[4769]: I1006 07:18:01.877195 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a2994a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.906865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.906964 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.906993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.907035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:01 crc kubenswrapper[4769]: I1006 07:18:01.907062 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:01Z","lastTransitionTime":"2025-10-06T07:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.010136 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.010185 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.010200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.010222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.010239 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.118920 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.119000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.119047 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.119074 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.119154 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.165803 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.165995 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.223384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.223493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.223513 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.223534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.223555 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.326760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.326833 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.326850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.326873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.326892 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.429376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.429435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.429446 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.429462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.429471 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.514330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.514465 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.514475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.514489 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.514499 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.527280 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.532167 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.532438 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.532521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.532592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.532659 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.544926 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.549340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.549462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.549534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.549599 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.549657 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.567550 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.571647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.571762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.571827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.571905 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.571976 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.584175 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.588506 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.588627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.588705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.588791 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.589066 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.590882 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/3.log" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.593417 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.593632 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.603306 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: E1006 07:18:02.603663 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606325 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606628 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.606701 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.620918 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.650234 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:18:01Z\\\",\\\"message\\\":\\\" openshift-apiserver/api for network=default are: map[]\\\\nI1006 07:18:01.081408 6803 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1006 07:18:01.081442 6803 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1006 07:18:01.081455 6803 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nF1006 07:18:01.081460 6803 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z]\\\\nI1006 07:18:01.081469 6803 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.667344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.682444 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.697619 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.709521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.709767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.709836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.709904 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.709976 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.723030 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.739122 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.757231 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.771695 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.786169 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.799905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.813536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.813651 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.813680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.813721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.813751 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.823810 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.841099 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.868081 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.888449 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.903314 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.916658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.916695 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.916707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:02 crc 
kubenswrapper[4769]: I1006 07:18:02.916724 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.916736 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:02Z","lastTransitionTime":"2025-10-06T07:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:02 crc kubenswrapper[4769]: I1006 07:18:02.917377 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:02Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.019287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.019364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.019379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.019430 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.019448 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.122657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.122706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.122718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.122736 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.122750 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.165451 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.165484 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:03 crc kubenswrapper[4769]: E1006 07:18:03.165581 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.165654 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:03 crc kubenswrapper[4769]: E1006 07:18:03.165739 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:03 crc kubenswrapper[4769]: E1006 07:18:03.165841 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.225407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.225497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.225507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.225521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.225530 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.328587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.328643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.328663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.328686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.328704 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.431765 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.431801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.431811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.431827 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.431836 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.535110 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.535165 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.535182 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.535205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.535223 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.638474 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.638806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.638900 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.638995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.639083 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.741645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.741715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.741738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.741767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.741789 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.845147 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.845219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.845238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.845262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.845279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.947956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.948002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.948017 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.948032 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:03 crc kubenswrapper[4769]: I1006 07:18:03.948044 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:03Z","lastTransitionTime":"2025-10-06T07:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.050811 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.050865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.050884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.050906 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.050925 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.153725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.153760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.153769 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.153782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.153791 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.165502 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:04 crc kubenswrapper[4769]: E1006 07:18:04.165639 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.186523 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.199717 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc 
kubenswrapper[4769]: I1006 07:18:04.216728 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.229569 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.243468 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.256008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.256049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.256060 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.256075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.256085 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.257367 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab7
9b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.269916 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.280830 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.292389 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.303688 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.318410 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.339614 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.353517 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.358432 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.358462 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc 
kubenswrapper[4769]: I1006 07:18:04.358471 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.358484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.358494 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.373605 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:18:01Z\\\",\\\"message\\\":\\\" openshift-apiserver/api for network=default are: map[]\\\\nI1006 07:18:01.081408 6803 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1006 07:18:01.081442 6803 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1006 07:18:01.081455 6803 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nF1006 07:18:01.081460 6803 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z]\\\\nI1006 07:18:01.081469 6803 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.389899 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.398871 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.413129 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.424741 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:04Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.460388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.460435 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.460447 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.460463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.460477 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.563653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.563692 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.563703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.563717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.563727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.666470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.666791 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.666808 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.666831 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.666852 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.769309 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.769345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.769355 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.769369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.769379 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.871396 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.871667 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.871760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.871860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.871963 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.973806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.973869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.973885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.973912 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:04 crc kubenswrapper[4769]: I1006 07:18:04.973928 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:04Z","lastTransitionTime":"2025-10-06T07:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.076441 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.076468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.076477 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.076490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.076498 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.164856 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:05 crc kubenswrapper[4769]: E1006 07:18:05.165198 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.165112 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:05 crc kubenswrapper[4769]: E1006 07:18:05.165484 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.165092 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:05 crc kubenswrapper[4769]: E1006 07:18:05.165697 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.178372 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.178407 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.178415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.178452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.178465 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.280666 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.280697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.280705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.280733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.280742 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.383307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.383351 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.383392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.383408 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.383445 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.486007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.486079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.486091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.486131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.486143 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.588910 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.588953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.588967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.588986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.588999 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.692681 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.692744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.692761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.692810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.692831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.795874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.796143 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.796246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.796344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.796463 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.898832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.898878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.898895 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.898916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:05 crc kubenswrapper[4769]: I1006 07:18:05.898933 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:05Z","lastTransitionTime":"2025-10-06T07:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.001879 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.002508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.002543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.002566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.002582 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.105013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.105066 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.105084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.105112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.105130 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.165871 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:06 crc kubenswrapper[4769]: E1006 07:18:06.166237 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.207746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.207782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.207794 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.207808 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.207832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.309688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.309725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.309734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.309748 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.309760 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.412055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.412111 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.412127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.412150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.412177 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.514326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.514395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.514456 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.514489 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.514509 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.617234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.617709 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.618214 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.618480 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.618723 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.722554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.722641 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.722657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.722680 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.722697 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.825602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.825635 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.825646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.825662 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.825673 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.928362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.928398 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.928415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.928472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:06 crc kubenswrapper[4769]: I1006 07:18:06.928486 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:06Z","lastTransitionTime":"2025-10-06T07:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.030974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.031009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.031016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.031029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.031038 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.133829 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.133878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.133886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.133898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.133907 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.165839 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.165902 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.165842 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:07 crc kubenswrapper[4769]: E1006 07:18:07.166015 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:07 crc kubenswrapper[4769]: E1006 07:18:07.166116 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:07 crc kubenswrapper[4769]: E1006 07:18:07.166196 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.236169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.236200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.236212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.236229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.236241 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.338536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.338576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.338588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.338605 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.338619 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.441943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.442240 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.442340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.442491 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.442606 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.544525 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.544915 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.545073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.545230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.545387 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.648554 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.648615 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.648627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.648642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.648650 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.752054 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.752094 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.752105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.752146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.752158 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.855593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.855658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.855670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.855687 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.855699 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.959253 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.959292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.959304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.959320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:07 crc kubenswrapper[4769]: I1006 07:18:07.959331 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:07Z","lastTransitionTime":"2025-10-06T07:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.062166 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.062207 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.062252 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.062270 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.062279 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165051 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.165232 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165358 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.165396 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.268911 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.268971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.268990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.269014 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.269031 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.322847 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.323145 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:12.323109565 +0000 UTC m=+148.847390752 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.371865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.371946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.371969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.371999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.372022 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.423977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.424041 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.424078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.424124 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424151 4769 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424269 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.424238527 +0000 UTC m=+148.948519744 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424273 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424293 4769 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424314 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424321 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424337 4769 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 06 07:18:08 crc 
kubenswrapper[4769]: E1006 07:18:08.424354 4769 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424361 4769 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424383 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.424362611 +0000 UTC m=+148.948643798 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424411 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.424397862 +0000 UTC m=+148.948679049 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:18:08 crc kubenswrapper[4769]: E1006 07:18:08.424483 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.424456353 +0000 UTC m=+148.948737540 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.475216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.475275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.475292 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.475315 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.475333 4769 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.578127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.578194 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.578214 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.578243 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.578260 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.681238 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.681289 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.681304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.681326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.681344 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.784494 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.784561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.784575 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.784589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.784598 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.886946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.886982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.886991 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.887042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.887070 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.990148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.990538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.990708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.990837 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:08 crc kubenswrapper[4769]: I1006 07:18:08.990945 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:08Z","lastTransitionTime":"2025-10-06T07:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.094023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.094696 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.094822 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.094916 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.095002 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.165848 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:09 crc kubenswrapper[4769]: E1006 07:18:09.166023 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.166049 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.166130 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:09 crc kubenswrapper[4769]: E1006 07:18:09.166676 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:09 crc kubenswrapper[4769]: E1006 07:18:09.167177 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.197476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.197783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.197856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.197928 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.197987 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.300738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.300806 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.300821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.300838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.300852 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.403759 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.403808 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.403820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.403838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.403851 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.506071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.506114 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.506128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.506145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.506155 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.609275 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.609324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.609340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.609365 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.609382 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.712224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.712285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.712302 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.712326 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.712343 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.814694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.814749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.814766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.814788 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.814804 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.917279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.917327 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.917339 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.917357 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:09 crc kubenswrapper[4769]: I1006 07:18:09.917369 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:09Z","lastTransitionTime":"2025-10-06T07:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.020907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.020994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.021009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.021033 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.021048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.124653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.124710 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.124728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.124753 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.124770 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.165826 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:10 crc kubenswrapper[4769]: E1006 07:18:10.165935 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.226667 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.226708 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.226728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.226745 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.226756 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.329406 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.329454 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.329463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.329475 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.329533 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.431865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.431921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.431938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.431961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.431978 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.535036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.535539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.535763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.535951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.536114 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.639461 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.639499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.639508 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.639521 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.639531 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.741560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.741623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.741646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.741678 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.741701 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.844721 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.844754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.844762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.844776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.844784 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.947540 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.947820 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.947886 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.947975 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:10 crc kubenswrapper[4769]: I1006 07:18:10.948066 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:10Z","lastTransitionTime":"2025-10-06T07:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.050853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.050914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.050931 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.050958 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.050976 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.153896 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.153942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.153959 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.153983 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.154003 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.164911 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.164958 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Oct 06 07:18:11 crc kubenswrapper[4769]: E1006 07:18:11.165059 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.164919 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Oct 06 07:18:11 crc kubenswrapper[4769]: E1006 07:18:11.165196 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Oct 06 07:18:11 crc kubenswrapper[4769]: E1006 07:18:11.165289 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.257078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.257299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.257390 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.257481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.257552 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.359472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.360067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.360335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.360581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.360801 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.463501 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.463783 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.464005 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.464246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.464472 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.567649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.567901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.568043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.568173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.568314 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.671877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.671939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.671957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.671984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.672002 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.774744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.775107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.775260 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.775392 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.775545 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.877994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.878051 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.878063 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.878079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.878114 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.980697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.980749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.980763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.980782 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:11 crc kubenswrapper[4769]: I1006 07:18:11.980795 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:11Z","lastTransitionTime":"2025-10-06T07:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.083746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.083787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.083821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.083840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.083852 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.165395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs"
Oct 06 07:18:12 crc kubenswrapper[4769]: E1006 07:18:12.166669 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.186507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.186887 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.187155 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.187503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.187827 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.291137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.291384 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.291606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.291734 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.291850 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.395742 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.395840 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.395858 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.395929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.395948 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.498415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.498512 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.498537 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.498569 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.498590 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.600929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.600966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.600977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.600995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.601007 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.703178 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.703224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.703241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.703259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.703271 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.805901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.805968 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.805991 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.806020 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.806042 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.908366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.908452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.908464 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.908481 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.908492 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.953217 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.953308 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.953335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.953368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.953392 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:12 crc kubenswrapper[4769]: E1006 07:18:12.970288 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.974367 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.974472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.974492 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.974516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.974536 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:12 crc kubenswrapper[4769]: E1006 07:18:12.993393 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:12Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.997728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.997762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.997776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.997796 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:12 crc kubenswrapper[4769]: I1006 07:18:12.997812 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:12Z","lastTransitionTime":"2025-10-06T07:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.017661 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.022660 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.022694 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.022728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.022744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.022756 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.039216 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.043280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.043317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.043329 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.043343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.043354 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.062625 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:13Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.062774 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.064718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.064790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.064801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.064816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.064827 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.166009 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.166009 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.166152 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.166267 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.166444 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:13 crc kubenswrapper[4769]: E1006 07:18:13.166543 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.167065 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.167103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.167126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.167144 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.167159 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.269856 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.269907 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.269922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.269939 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.269953 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.373104 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.373181 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.373200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.373224 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.373244 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.475394 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.475487 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.475505 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.475533 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.475551 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.581229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.581285 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.581304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.581328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.581346 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.684117 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.684150 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.684158 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.684172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.684183 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.787877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.787960 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.787977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.788472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.788512 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.891774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.891813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.891826 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.891844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.891855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.994967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.995000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.995010 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.995026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:13 crc kubenswrapper[4769]: I1006 07:18:13.995037 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:13Z","lastTransitionTime":"2025-10-06T07:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.097518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.097609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.097643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.097674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.097695 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.165372 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:14 crc kubenswrapper[4769]: E1006 07:18:14.165624 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.181958 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.200495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.200566 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.200586 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.200614 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.200633 4769 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.205934 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b
e2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.224846 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.238387 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.256803 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.273711 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.287047 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.303031 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.303084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc 
kubenswrapper[4769]: I1006 07:18:14.303102 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.303133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.303156 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.306354 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:18:01Z\\\",\\\"message\\\":\\\" openshift-apiserver/api for network=default are: map[]\\\\nI1006 07:18:01.081408 6803 
obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1006 07:18:01.081442 6803 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1006 07:18:01.081455 6803 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nF1006 07:18:01.081460 6803 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z]\\\\nI1006 07:18:01.081469 6803 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.319300 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.342356 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb7613c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.359884 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.382943 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759
e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.395613 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.406247 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.406328 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.406343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.406366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.406381 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.412022 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc 
kubenswrapper[4769]: I1006 07:18:14.430949 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.479200 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.497944 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.509064 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.509105 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.509113 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.509148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.509157 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.513344 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:14Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.610761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.610807 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.610823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.610846 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.610863 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.713595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.713648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.713658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.713673 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.713684 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.816611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.816643 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.816652 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.816664 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.816672 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.918836 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.918892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.918910 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.918934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:14 crc kubenswrapper[4769]: I1006 07:18:14.918950 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:14Z","lastTransitionTime":"2025-10-06T07:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.021379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.021504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.021528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.021570 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.021593 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.124526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.124560 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.124571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.124585 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.124594 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.165522 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.165585 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.165525 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:15 crc kubenswrapper[4769]: E1006 07:18:15.165694 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:15 crc kubenswrapper[4769]: E1006 07:18:15.165834 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:15 crc kubenswrapper[4769]: E1006 07:18:15.165985 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.166705 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:18:15 crc kubenswrapper[4769]: E1006 07:18:15.166836 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.226684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.226772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.226795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.226823 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.226842 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.329914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.329954 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.329969 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.329993 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.330009 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.432293 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.432369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.432391 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.432418 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.432470 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.535977 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.536076 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.536095 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.536120 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.536172 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.639839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.639892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.639909 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.639930 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.639942 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.742544 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.742624 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.742653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.742688 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.742715 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.845096 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.845148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.845160 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.845172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.845181 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.947522 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.947563 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.947574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.947591 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:15 crc kubenswrapper[4769]: I1006 07:18:15.947601 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:15Z","lastTransitionTime":"2025-10-06T07:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.049334 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.049368 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.049379 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.049394 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.049405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.151609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.151636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.151647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.151663 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.151673 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.165784 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:16 crc kubenswrapper[4769]: E1006 07:18:16.165887 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.253805 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.253839 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.253847 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.253860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.253868 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.355588 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.355623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.355632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.355646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.355658 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.458592 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.458625 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.458636 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.458650 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.458660 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.560880 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.560918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.560929 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.560944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.560954 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.663682 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.663717 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.663728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.663744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.663756 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.765910 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.765974 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.765999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.766027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.766048 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.868652 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.868716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.868732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.868755 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.868771 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.970935 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.971031 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.971055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.971141 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:16 crc kubenswrapper[4769]: I1006 07:18:16.971182 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:16Z","lastTransitionTime":"2025-10-06T07:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.073949 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.073989 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.074000 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.074016 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.074024 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.165460 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.165480 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:17 crc kubenswrapper[4769]: E1006 07:18:17.165632 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.165716 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:17 crc kubenswrapper[4769]: E1006 07:18:17.165750 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:17 crc kubenswrapper[4769]: E1006 07:18:17.165968 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.176808 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.176860 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.176873 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.176890 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.176903 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.279735 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.279778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.279791 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.279807 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.279820 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.382472 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.382528 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.382536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.382550 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.382559 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.485602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.485645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.485655 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.485691 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.485704 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.587776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.587816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.587824 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.587844 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.587853 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.693551 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.693623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.693637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.693656 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.693668 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.796718 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.796768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.796780 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.796799 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.796810 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.899279 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.899344 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.899366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.899395 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:17 crc kubenswrapper[4769]: I1006 07:18:17.899423 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:17Z","lastTransitionTime":"2025-10-06T07:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.002996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.003071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.003094 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.003124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.003145 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.106459 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.106518 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.106536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.106562 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.106580 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.165828 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:18 crc kubenswrapper[4769]: E1006 07:18:18.166086 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.210994 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.211071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.211090 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.211122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.211147 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.314686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.314766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.314786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.314821 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.314843 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.418760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.418832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.418854 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.418883 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.418907 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.521740 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.521797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.521814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.521838 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.521855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.624571 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.624644 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.624665 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.624699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.624721 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.727715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.727785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.727810 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.727841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.727862 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.830855 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.830938 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.830961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.830990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.831009 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.934568 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.934618 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.934629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.934646 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:18 crc kubenswrapper[4769]: I1006 07:18:18.934661 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:18Z","lastTransitionTime":"2025-10-06T07:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.037221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.037304 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.037322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.037350 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.037373 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.140497 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.140541 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.140553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.140576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.140587 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.165136 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.165210 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.165256 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:19 crc kubenswrapper[4769]: E1006 07:18:19.165279 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:19 crc kubenswrapper[4769]: E1006 07:18:19.165455 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:19 crc kubenswrapper[4769]: E1006 07:18:19.165608 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.243966 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.244011 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.244027 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.244043 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.244053 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.346987 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.347037 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.347049 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.347067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.347079 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.449732 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.449803 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.449816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.449834 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.449846 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.551919 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.551973 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.551982 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.551999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.552008 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.654697 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.654739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.654750 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.654763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.654773 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.757002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.757042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.757058 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.757073 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.757082 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.859948 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.859986 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.859995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.860008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.860017 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.963746 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.963785 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.963794 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.963809 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:19 crc kubenswrapper[4769]: I1006 07:18:19.963818 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:19Z","lastTransitionTime":"2025-10-06T07:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.066478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.066519 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.066527 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.066545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.066555 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.165327 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:20 crc kubenswrapper[4769]: E1006 07:18:20.165498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.169265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.169301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.169314 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.169330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.169342 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.271762 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.271832 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.271850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.271875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.271894 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.374200 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.374268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.374287 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.374313 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.374332 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.478031 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.478086 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.478179 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.478220 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.478293 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.581124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.581190 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.581216 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.581245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.581271 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.684164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.684261 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.684297 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.684335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.684510 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.786870 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.786944 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.786965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.786995 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.787017 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.890394 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.890493 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.890510 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.890534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.890551 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.993845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.993902 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.993921 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.994007 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:20 crc kubenswrapper[4769]: I1006 07:18:20.994039 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:20Z","lastTransitionTime":"2025-10-06T07:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.096739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.096789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.096802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.096819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.096831 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.165031 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.165110 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.165046 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:21 crc kubenswrapper[4769]: E1006 07:18:21.165224 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:21 crc kubenswrapper[4769]: E1006 07:18:21.165457 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:21 crc kubenswrapper[4769]: E1006 07:18:21.165570 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.200059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.200129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.200147 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.200175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.200207 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.303131 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.303213 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.303234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.303265 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.303289 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.407099 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.407142 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.407153 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.407170 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.407179 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.509892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.509942 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.509951 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.509965 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.509973 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.613075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.613116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.613129 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.613146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.613155 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.714894 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.714946 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.714961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.714984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.714999 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.817040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.817069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.817079 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.817093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.817102 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.919539 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.919572 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.919581 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.919595 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:21 crc kubenswrapper[4769]: I1006 07:18:21.919603 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:21Z","lastTransitionTime":"2025-10-06T07:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.021877 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.021934 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.021943 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.021955 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.021963 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.124940 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.124981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.124992 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.125012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.125024 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.165282 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:22 crc kubenswrapper[4769]: E1006 07:18:22.165472 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.227212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.227254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.227263 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.227277 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.227287 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.330001 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.330035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.330044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.330057 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.330068 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.432516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.432553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.432596 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.432611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.432620 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.536299 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.536332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.536340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.536352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.536360 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.639645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.639716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.639733 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.639756 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.639773 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.741654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.741711 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.741727 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.741749 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.741765 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.844781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.844869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.844878 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.844898 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.844909 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.946970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.947021 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.947035 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.947056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:22 crc kubenswrapper[4769]: I1006 07:18:22.947068 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:22Z","lastTransitionTime":"2025-10-06T07:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.049221 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.049333 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.049351 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.049369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.049381 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.074538 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.074579 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.074587 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.074600 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.074608 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.086297 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:23Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.089219 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.089258 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.089268 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.089280 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.089288 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.103003 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:23Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.109439 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.109490 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.109504 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.109523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.109537 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.121801 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:23Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.125573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.125607 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.125616 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.125633 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.125646 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.152764 4769 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-06T07:18:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"cdc8e157-8825-4eb6-bd1e-19bb6087ad55\\\",\\\"systemUUID\\\":\\\"dd6f6a0d-d0ec-448e-a352-aa70e3f0b94f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:23Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.152936 4769 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.154409 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.154503 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.154517 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.154534 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.154639 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.165226 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.165327 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.165226 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.165378 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.165405 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:23 crc kubenswrapper[4769]: E1006 07:18:23.165622 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.256388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.256458 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.256470 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.256484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.256494 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.359296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.359354 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.359369 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.359400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.359416 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.462324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.462376 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.462396 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.462422 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.462454 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.565172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.565237 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.565250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.565274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.565289 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.668069 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.668123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.668134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.668151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.668165 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.770078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.770127 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.770137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.770154 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.770163 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.874040 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.874091 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.874107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.874126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.874137 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.976659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.976720 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.976738 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.976763 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:23 crc kubenswrapper[4769]: I1006 07:18:23.976782 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:23Z","lastTransitionTime":"2025-10-06T07:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.079801 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.079890 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.079936 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.079971 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.080012 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.165907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:24 crc kubenswrapper[4769]: E1006 07:18:24.166226 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187157 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be6d75d33335fb12198b5d9b4ad54d53af37bc3aec9fce19fe3e440dc9edd990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3125506eb76
13c1a97537910219d3c661683b2a1455abc48470d6d248d28fa91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187768 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187781 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187798 4769 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.187808 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.201576 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:24 crc kubenswrapper[4769]: E1006 07:18:24.201855 4769 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:18:24 crc kubenswrapper[4769]: E1006 07:18:24.201981 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs podName:cbddd0e8-9d17-4278-acdc-e35d2d8d70f9 nodeName:}" failed. No retries permitted until 2025-10-06 07:19:28.201951749 +0000 UTC m=+164.726233086 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs") pod "network-metrics-daemon-wxwxs" (UID: "cbddd0e8-9d17-4278-acdc-e35d2d8d70f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.204771 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s8l5j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"32047d07-7551-41a0-8669-c5ee1674290c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5cc6fa2e3df26e842bb902c7ebbc3b3da73b5a8288a9d4dbca6f01f7ba564746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-66f8k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s8l5j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.226492 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bq98f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25975c2-003c-4557-902c-2ccbc18d0881\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://856de542a23c60f8f975182ad56031e2a6f42fb85658e8ea485dd066679e91ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67855241d47d0481fd578cd53a10972f8dbca90b930c2d52ac92c39c45b44f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d51e811df9e332c83ebfc54704dda6886ec111d52c0ef31f7b95a4dac25a731d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59c2988ff925a3f3c504edab0d31725f8498378b9fe977e7b84c1ebbbe46b341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7572b
77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7572b77fa45b12be8f50d1a01b35ebfc2e3a28719e56c34458e85162ce58c58e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7876b885aa1e6b175e1a1ce1cb977759e6ded33a638578a26f17e90a9ba6f0d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://67177fb5e3e1205f700a078f71fe4603b48d2704998dee4cd2bc8e3ba60cce9a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5c8gm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bq98f\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.240970 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b48h2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f3d333f-88b0-49d3-a503-ee2c3a48c17b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a35d40bd6cd9f49789ecf5114f7f1dceeda120d8b06a2562ea0c1303392416c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-10-06T07:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vnwcb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b48h2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.260873 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.280161 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a197468752ae39c9e2fe30609e782043f2aaf2b20c0cae85b2c20e0400fca0e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.291869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.291950 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.291990 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.292012 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.292025 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.301607 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e47ce01981b1d0986d4c13045590d27cfdb9bc98f04d4f7318675b962f73313\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.317394 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cjjvp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b98abd5-990e-494c-a2a5-526fae1bd5ec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a6
19dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:17:54Z\\\",\\\"message\\\":\\\"2025-10-06T07:17:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5\\\\n2025-10-06T07:17:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_f253b1d7-ad8d-43c7-93ca-b669040563c5 to /host/opt/cni/bin/\\\\n2025-10-06T07:17:09Z [verbose] multus-daemon started\\\\n2025-10-06T07:17:09Z [verbose] Readiness Indicator file check\\\\n2025-10-06T07:17:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/
kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p8lpx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cjjvp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.331557 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2c48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wxwxs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc 
kubenswrapper[4769]: I1006 07:18:24.345087 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67e287f5-b174-4596-b1ff-af5378e54fe6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8cbd0d89b9d5908bb67ced63fb2514a0b56cb008c32239522090fabeb3ecee4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b5f0e3593f8b1c3a3a41e882a2ab79b554a69f41fd1c2219c796bcb1d464cdd\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229fb80415c7ff313ddda8bd391e263bd7aea247d2f0ae93781b8305af8a5e66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://95d0c601bc6f2b6639f29aaa81d18de17b9375b5cd9c97d334b83efc98ad4533\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.362469 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5321921-a8cd-48c8-95e2-e59804c23c13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7770996a14d7eeee9bac2974d52e13200e22984db016dc09c23f1eb69dbef753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0b5c387bd5c5a691d0bfe81ae797d8b786747bb3b55692215c849b1612df53d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7477ccb3562fd404af3847d6cf888ae8cb885fd1d1f1851ee5a17d4a1f1a685c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be2ad8712114a084241dc9d3813b8e14416f1207bb1bff5a0aca528c6edb963f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:4
7Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2952beaef1ddfc3bdf1a56af80648650702e7e7c7e164fb376a0b105dcad6096\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54c7d163fe842fb97dd3c1dfc4f6167da7fa25675468aa7759778f9ae5d4c5a4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.377564 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.390905 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff761ae3-3c80-40f1-9aff-ea1585a9199f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec6be9df8766a1627f7020787ecbd13511046657d8704d0c4ae66fd7129d9959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://19b35a8ce381345024662ab6d6f8a38279262c4a
c8ecab5c35da191d1afe2205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-97qtl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rlfqr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.396956 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.397002 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.397013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc 
kubenswrapper[4769]: I1006 07:18:24.397029 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.397040 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.409237 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"113c5d9e-3eb4-4837-8ed0-da8f31925d9f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7580eae4bc10094c09c04d72d12e8880520dbb8dbe9ecea81afae4efa17dc473\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d752a2fbf832cb7a14e0e856ad663c0992c44190110f57b3a57b7d2dbf7182e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc2aeb6328ca4e25bea5180a171049df6d908bd199fdc6e8452d7dfcd0f51308\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://858b4c39cf8fa840260b4084a07367fa34f0476385643f5f3e6c32a8d605bb36\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.422803 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dec66d4-a680-4dff-8583-1cd86ae56d1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1be61ede3642b697eb5935aa7fc86cf6b91fb82c20e6ad664ded64cd1e044d11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:16:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7599b8140a0d3aa3a9c3dcf6702f1b70c0cb33852b70f4221eff3a313a4d1dde\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:16:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:16:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:16:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.435002 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:03Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.453719 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"084bbba5-5940-4065-a799-2e6baff2338d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-06T07:18:01Z\\\",\\\"message\\\":\\\" openshift-apiserver/api for network=default are: map[]\\\\nI1006 07:18:01.081408 6803 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1006 07:18:01.081442 6803 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1006 
07:18:01.081455 6803 lb_config.go:1031] Cluster endpoints for openshift-kube-scheduler-operator/metrics for network=default are: map[]\\\\nF1006 07:18:01.081460 6803 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:01Z is after 2025-08-24T17:21:41Z]\\\\nI1006 07:18:01.081469 6803 obj_r\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-06T07:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9b7d28a1a2ec34f5dd
47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-06T07:17:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-06T07:17:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sskw8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:06Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8bknc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.465538 4769 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bd0044-3318-436a-bd7f-f1e0268a30e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-06T07:17:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9cb3f57451c2f93187ee7f452a909e728af8659656be8db06d9cbee24f60e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022a71bcf7ba3ffcdc6b49ccb90b6e2351a29
94a5daf06e93e874d222b2a6b75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-06T07:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64whz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-06T07:17:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-882lj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-06T07:18:24Z is after 2025-08-24T17:21:41Z" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.533283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.533324 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.533334 4769 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.533348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.533357 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.635737 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.635766 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.635774 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.635787 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.635799 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.737689 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.737722 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.737739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.737756 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.737766 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.840125 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.840164 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.840176 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.840193 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.840206 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.942303 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.942345 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.942362 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.942381 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:24 crc kubenswrapper[4769]: I1006 07:18:24.942391 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:24Z","lastTransitionTime":"2025-10-06T07:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.044654 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.044699 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.044707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.044725 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.044738 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.146352 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.146440 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.146452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.146468 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.146482 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.165875 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.165916 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.165993 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:25 crc kubenswrapper[4769]: E1006 07:18:25.166074 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:25 crc kubenswrapper[4769]: E1006 07:18:25.166592 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:25 crc kubenswrapper[4769]: E1006 07:18:25.166696 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.179095 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.248657 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.248690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.248701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.248716 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.248727 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.350807 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.350845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.350853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.350867 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.350876 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.453623 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.453659 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.453667 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.453684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.453719 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.556366 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.556489 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.556516 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.556545 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.556569 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.659172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.659212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.659225 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.659241 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.659251 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.761918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.761957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.761967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.761981 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.761989 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.865082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.865124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.865132 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.865146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.865156 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.968814 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.969044 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.969055 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.969072 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:25 crc kubenswrapper[4769]: I1006 07:18:25.969094 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:25Z","lastTransitionTime":"2025-10-06T07:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.071913 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.071948 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.071957 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.071970 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.071979 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.165722 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:26 crc kubenswrapper[4769]: E1006 07:18:26.165930 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.176204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.176301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.176320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.176382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.176400 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.279786 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.279874 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.279892 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.279922 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.279943 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.383348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.383463 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.383484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.383511 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.383531 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.487536 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.487598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.487611 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.487631 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.487644 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.590259 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.590307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.590317 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.590335 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.590347 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.693228 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.693272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.693281 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.693301 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.693312 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.796795 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.796865 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.796884 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.796914 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.796935 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.899743 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.899804 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.899816 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.899841 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:26 crc kubenswrapper[4769]: I1006 07:18:26.899856 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:26Z","lastTransitionTime":"2025-10-06T07:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.002901 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.002967 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.002984 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.003004 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.003019 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.106101 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.106172 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.106196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.106229 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.106255 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.165683 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.165796 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:27 crc kubenswrapper[4769]: E1006 07:18:27.165869 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.165881 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:27 crc kubenswrapper[4769]: E1006 07:18:27.166383 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.166516 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:18:27 crc kubenswrapper[4769]: E1006 07:18:27.166610 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:27 crc kubenswrapper[4769]: E1006 07:18:27.166660 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8bknc_openshift-ovn-kubernetes(084bbba5-5940-4065-a799-2e6baff2338d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.209570 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.209629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.209647 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.209671 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.209692 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.312254 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.312330 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.312343 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.312364 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.312381 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.415744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.415793 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.415802 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.415819 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.415832 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.518961 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.518999 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.519008 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.519023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.519032 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.622797 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.622864 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.622885 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.622918 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.622941 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.725391 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.725479 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.725496 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.725515 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.725526 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.829523 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.829602 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.829622 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.829653 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.829677 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.933577 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.933632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.933648 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.933670 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:27 crc kubenswrapper[4769]: I1006 07:18:27.933683 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:27Z","lastTransitionTime":"2025-10-06T07:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.036402 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.036478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.036489 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.036507 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.036520 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.139466 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.139520 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.139532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.139553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.139566 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.165170 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:28 crc kubenswrapper[4769]: E1006 07:18:28.165491 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.242715 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.242775 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.242789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.242807 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.242818 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.345341 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.345388 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.345400 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.345415 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.345444 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.448059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.448103 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.448116 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.448134 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.448145 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.550126 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.550187 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.550204 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.550230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.550252 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.654625 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.654705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.654730 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.654760 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.654781 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.756705 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.756754 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.756764 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.756778 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.756788 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.859476 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.859531 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.859543 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.859559 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.859571 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.962015 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.962059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.962071 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.962088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:28 crc kubenswrapper[4769]: I1006 07:18:28.962098 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:28Z","lastTransitionTime":"2025-10-06T07:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.064151 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.064187 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.064197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.064210 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.064219 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.165676 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.165715 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.165715 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:29 crc kubenswrapper[4769]: E1006 07:18:29.165804 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:29 crc kubenswrapper[4769]: E1006 07:18:29.165991 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:29 crc kubenswrapper[4769]: E1006 07:18:29.166063 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.167128 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.167203 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.167222 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.167245 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.167263 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.270026 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.270140 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.270212 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.270250 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.270321 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.372574 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.372637 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.372658 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.372685 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.372707 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.475340 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.475382 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.475397 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.475411 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.475446 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.578532 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.578589 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.578610 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.578642 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.578663 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.681206 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.681246 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.681256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.681272 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.681284 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.783361 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.783405 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.783416 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.783452 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.783465 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.886013 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.886056 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.886067 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.886084 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.886095 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.988255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.988296 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.988307 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.988322 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:29 crc kubenswrapper[4769]: I1006 07:18:29.988333 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:29Z","lastTransitionTime":"2025-10-06T07:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.091274 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.091320 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.091331 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.091348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.091360 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.165873 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:30 crc kubenswrapper[4769]: E1006 07:18:30.166121 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.194075 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.194114 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.194123 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.194137 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.194145 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.296799 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.296853 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.296863 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.296875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.296884 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.399565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.399617 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.399628 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.399645 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.399657 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.502953 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.502996 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.503009 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.503023 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.503032 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.606706 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.606776 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.606875 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.606903 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.606920 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.710478 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.710561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.710576 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.710598 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.710611 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.814276 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.814323 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.814332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.814346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.814355 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.917088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.917124 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.917133 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.917145 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:30 crc kubenswrapper[4769]: I1006 07:18:30.917154 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:30Z","lastTransitionTime":"2025-10-06T07:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.019526 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.019606 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.019632 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.019669 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.019697 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.122306 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.122356 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.122371 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.122389 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.122405 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.165305 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:31 crc kubenswrapper[4769]: E1006 07:18:31.165462 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.165322 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.165300 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:31 crc kubenswrapper[4769]: E1006 07:18:31.165533 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:31 crc kubenswrapper[4769]: E1006 07:18:31.165875 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.225036 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.225107 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.225122 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.225138 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.225146 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.328609 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.328684 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.328702 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.328744 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.328756 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.431686 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.431804 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.431830 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.431869 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.431897 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.535524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.535565 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.535573 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.535593 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.535602 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.638701 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.638790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.638815 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.638889 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.638916 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.742650 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.742761 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.742790 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.742828 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.742855 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.846088 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.846175 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.846197 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.846227 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.846249 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.949360 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.949484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.949499 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.949524 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:31 crc kubenswrapper[4769]: I1006 07:18:31.949538 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:31Z","lastTransitionTime":"2025-10-06T07:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.053059 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.053130 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.053148 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.053173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.053190 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.156177 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.156209 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.156218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.156232 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.156242 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.165857 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:32 crc kubenswrapper[4769]: E1006 07:18:32.166065 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.258561 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.258629 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.258649 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.258674 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.258690 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.361042 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.361082 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.361093 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.361112 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.361125 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.464739 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.464825 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.464850 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.464881 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.464905 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.568156 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.568230 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.568255 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.568286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.568309 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.671205 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.671271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.671288 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.671319 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.671341 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.775078 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.775146 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.775169 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.775196 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.775220 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.877627 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.877690 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.877707 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.877728 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.877741 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.981767 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.981845 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.981857 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.981871 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:32 crc kubenswrapper[4769]: I1006 07:18:32.981880 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:32Z","lastTransitionTime":"2025-10-06T07:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.085453 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.085484 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.085495 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.085553 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.085563 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:33Z","lastTransitionTime":"2025-10-06T07:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.165909 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.165950 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:33 crc kubenswrapper[4769]: E1006 07:18:33.166024 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.165910 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:33 crc kubenswrapper[4769]: E1006 07:18:33.166393 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:33 crc kubenswrapper[4769]: E1006 07:18:33.166653 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.188289 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.188321 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.188332 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.188346 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.188355 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:33Z","lastTransitionTime":"2025-10-06T07:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.291173 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.291234 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.291256 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.291286 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.291306 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:33Z","lastTransitionTime":"2025-10-06T07:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.394218 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.394262 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.394271 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.394283 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.394293 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:33Z","lastTransitionTime":"2025-10-06T07:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.416703 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.416772 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.416789 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.416813 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.416830 4769 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-06T07:18:33Z","lastTransitionTime":"2025-10-06T07:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.478691 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd"] Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.479181 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.481473 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.481681 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.482648 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.482659 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.514312 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.514288037 podStartE2EDuration="34.514288037s" podCreationTimestamp="2025-10-06 07:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.494744912 +0000 UTC m=+110.019026079" watchObservedRunningTime="2025-10-06 07:18:33.514288037 +0000 UTC m=+110.038569184" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.548458 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-882lj" podStartSLOduration=88.548408126 podStartE2EDuration="1m28.548408126s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.544775653 +0000 UTC m=+110.069056820" watchObservedRunningTime="2025-10-06 
07:18:33.548408126 +0000 UTC m=+110.072689313" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.565600 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=87.565582114 podStartE2EDuration="1m27.565582114s" podCreationTimestamp="2025-10-06 07:17:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.564874234 +0000 UTC m=+110.089155391" watchObservedRunningTime="2025-10-06 07:18:33.565582114 +0000 UTC m=+110.089863261" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.571031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afc30ccc-8171-4561-bcb9-e5e0b661d10a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.571115 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afc30ccc-8171-4561-bcb9-e5e0b661d10a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.571198 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 
crc kubenswrapper[4769]: I1006 07:18:33.571237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.571276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc30ccc-8171-4561-bcb9-e5e0b661d10a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.576875 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-s8l5j" podStartSLOduration=89.576857115 podStartE2EDuration="1m29.576857115s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.575682311 +0000 UTC m=+110.099963458" watchObservedRunningTime="2025-10-06 07:18:33.576857115 +0000 UTC m=+110.101138302" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.596148 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bq98f" podStartSLOduration=89.596120661 podStartE2EDuration="1m29.596120661s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.595586126 +0000 UTC m=+110.119867283" watchObservedRunningTime="2025-10-06 
07:18:33.596120661 +0000 UTC m=+110.120401848" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.623897 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-b48h2" podStartSLOduration=88.623874449 podStartE2EDuration="1m28.623874449s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.608240246 +0000 UTC m=+110.132521473" watchObservedRunningTime="2025-10-06 07:18:33.623874449 +0000 UTC m=+110.148155636" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.672844 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afc30ccc-8171-4561-bcb9-e5e0b661d10a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.672926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.672962 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.672998 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc30ccc-8171-4561-bcb9-e5e0b661d10a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.673101 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afc30ccc-8171-4561-bcb9-e5e0b661d10a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.674250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.675055 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/afc30ccc-8171-4561-bcb9-e5e0b661d10a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.675459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/afc30ccc-8171-4561-bcb9-e5e0b661d10a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.681862 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afc30ccc-8171-4561-bcb9-e5e0b661d10a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.704275 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afc30ccc-8171-4561-bcb9-e5e0b661d10a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-st9xd\" (UID: \"afc30ccc-8171-4561-bcb9-e5e0b661d10a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.710957 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cjjvp" podStartSLOduration=89.710942882 podStartE2EDuration="1m29.710942882s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.710237072 +0000 UTC m=+110.234518219" watchObservedRunningTime="2025-10-06 07:18:33.710942882 +0000 UTC m=+110.235224029" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.751017 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.75100307 podStartE2EDuration="8.75100307s" podCreationTimestamp="2025-10-06 07:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.750611329 +0000 UTC m=+110.274892486" watchObservedRunningTime="2025-10-06 07:18:33.75100307 
+0000 UTC m=+110.275284217" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.780452 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.780414615 podStartE2EDuration="1m29.780414615s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.767285202 +0000 UTC m=+110.291566349" watchObservedRunningTime="2025-10-06 07:18:33.780414615 +0000 UTC m=+110.304695752" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.793731 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podStartSLOduration=89.793717613 podStartE2EDuration="1m29.793717613s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.793159987 +0000 UTC m=+110.317441134" watchObservedRunningTime="2025-10-06 07:18:33.793717613 +0000 UTC m=+110.317998760" Oct 06 07:18:33 crc kubenswrapper[4769]: I1006 07:18:33.795483 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" Oct 06 07:18:34 crc kubenswrapper[4769]: I1006 07:18:34.165758 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:34 crc kubenswrapper[4769]: E1006 07:18:34.166742 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:34 crc kubenswrapper[4769]: I1006 07:18:34.697150 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" event={"ID":"afc30ccc-8171-4561-bcb9-e5e0b661d10a","Type":"ContainerStarted","Data":"4cd15d44a6d684813333c1e12a38cacf7550340ad04fc1a842f1af8a685087cb"} Oct 06 07:18:34 crc kubenswrapper[4769]: I1006 07:18:34.697454 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" event={"ID":"afc30ccc-8171-4561-bcb9-e5e0b661d10a","Type":"ContainerStarted","Data":"5a89d7cf576844d95aab27bbff132f0c7b5b6aade1d393fefaf4204b784571a5"} Oct 06 07:18:34 crc kubenswrapper[4769]: I1006 07:18:34.711092 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.711066235 podStartE2EDuration="59.711066235s" podCreationTimestamp="2025-10-06 07:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:33.810720436 +0000 UTC m=+110.335001583" watchObservedRunningTime="2025-10-06 07:18:34.711066235 +0000 UTC m=+111.235347422" Oct 06 07:18:35 crc kubenswrapper[4769]: I1006 07:18:35.165068 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:35 crc kubenswrapper[4769]: I1006 07:18:35.165861 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:35 crc kubenswrapper[4769]: I1006 07:18:35.166053 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:35 crc kubenswrapper[4769]: E1006 07:18:35.166182 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:35 crc kubenswrapper[4769]: E1006 07:18:35.166323 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:35 crc kubenswrapper[4769]: E1006 07:18:35.166383 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:36 crc kubenswrapper[4769]: I1006 07:18:36.165460 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:36 crc kubenswrapper[4769]: E1006 07:18:36.165583 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:37 crc kubenswrapper[4769]: I1006 07:18:37.164970 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:37 crc kubenswrapper[4769]: I1006 07:18:37.164990 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:37 crc kubenswrapper[4769]: E1006 07:18:37.165102 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:37 crc kubenswrapper[4769]: E1006 07:18:37.165190 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:37 crc kubenswrapper[4769]: I1006 07:18:37.164999 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:37 crc kubenswrapper[4769]: E1006 07:18:37.165574 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:38 crc kubenswrapper[4769]: I1006 07:18:38.165529 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:38 crc kubenswrapper[4769]: E1006 07:18:38.165802 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:39 crc kubenswrapper[4769]: I1006 07:18:39.165066 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:39 crc kubenswrapper[4769]: I1006 07:18:39.165123 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:39 crc kubenswrapper[4769]: I1006 07:18:39.165068 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:39 crc kubenswrapper[4769]: E1006 07:18:39.165264 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:39 crc kubenswrapper[4769]: E1006 07:18:39.165517 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:39 crc kubenswrapper[4769]: E1006 07:18:39.165651 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.165270 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:40 crc kubenswrapper[4769]: E1006 07:18:40.165393 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.712960 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/1.log" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.713915 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/0.log" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.714031 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" containerID="94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b" exitCode=1 Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.714149 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerDied","Data":"94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b"} Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.714227 4769 scope.go:117] "RemoveContainer" containerID="7a619dc1a987f01ac0ebf2fa03fff6d80e4346545305a8937954e41a5b7799a0" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.714589 4769 scope.go:117] "RemoveContainer" containerID="94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b" Oct 06 07:18:40 crc kubenswrapper[4769]: E1006 07:18:40.714765 4769 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cjjvp_openshift-multus(3b98abd5-990e-494c-a2a5-526fae1bd5ec)\"" pod="openshift-multus/multus-cjjvp" podUID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" Oct 06 07:18:40 crc kubenswrapper[4769]: I1006 07:18:40.734523 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-st9xd" podStartSLOduration=96.73450707800001 podStartE2EDuration="1m36.734507078s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:34.711906839 +0000 UTC m=+111.236187986" watchObservedRunningTime="2025-10-06 07:18:40.734507078 +0000 UTC m=+117.258788225" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.165743 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.165847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.166497 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:41 crc kubenswrapper[4769]: E1006 07:18:41.166664 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:41 crc kubenswrapper[4769]: E1006 07:18:41.166784 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:41 crc kubenswrapper[4769]: E1006 07:18:41.166927 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.167240 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.718545 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/1.log" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.720693 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/3.log" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.723176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" 
event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerStarted","Data":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.723749 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.949616 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podStartSLOduration=96.949594834 podStartE2EDuration="1m36.949594834s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:18:41.753240918 +0000 UTC m=+118.277522065" watchObservedRunningTime="2025-10-06 07:18:41.949594834 +0000 UTC m=+118.473875991" Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.950500 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wxwxs"] Oct 06 07:18:41 crc kubenswrapper[4769]: I1006 07:18:41.950598 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:41 crc kubenswrapper[4769]: E1006 07:18:41.950715 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:43 crc kubenswrapper[4769]: I1006 07:18:43.165762 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:43 crc kubenswrapper[4769]: I1006 07:18:43.165824 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:43 crc kubenswrapper[4769]: I1006 07:18:43.165843 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:43 crc kubenswrapper[4769]: I1006 07:18:43.165931 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:43 crc kubenswrapper[4769]: E1006 07:18:43.166103 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:43 crc kubenswrapper[4769]: E1006 07:18:43.166289 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:43 crc kubenswrapper[4769]: E1006 07:18:43.166490 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:43 crc kubenswrapper[4769]: E1006 07:18:43.166614 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:44 crc kubenswrapper[4769]: E1006 07:18:44.192539 4769 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 06 07:18:44 crc kubenswrapper[4769]: E1006 07:18:44.247887 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 07:18:45 crc kubenswrapper[4769]: I1006 07:18:45.165922 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:45 crc kubenswrapper[4769]: I1006 07:18:45.165959 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:45 crc kubenswrapper[4769]: I1006 07:18:45.165986 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:45 crc kubenswrapper[4769]: I1006 07:18:45.165922 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:45 crc kubenswrapper[4769]: E1006 07:18:45.166060 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:45 crc kubenswrapper[4769]: E1006 07:18:45.166276 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:45 crc kubenswrapper[4769]: E1006 07:18:45.166465 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:45 crc kubenswrapper[4769]: E1006 07:18:45.166614 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:47 crc kubenswrapper[4769]: I1006 07:18:47.165338 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:47 crc kubenswrapper[4769]: I1006 07:18:47.165704 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:47 crc kubenswrapper[4769]: I1006 07:18:47.165707 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:47 crc kubenswrapper[4769]: E1006 07:18:47.165847 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:47 crc kubenswrapper[4769]: I1006 07:18:47.165877 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:47 crc kubenswrapper[4769]: E1006 07:18:47.165943 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:47 crc kubenswrapper[4769]: E1006 07:18:47.166097 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:47 crc kubenswrapper[4769]: E1006 07:18:47.166263 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:49 crc kubenswrapper[4769]: I1006 07:18:49.165023 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:49 crc kubenswrapper[4769]: I1006 07:18:49.164999 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:49 crc kubenswrapper[4769]: E1006 07:18:49.165215 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:49 crc kubenswrapper[4769]: I1006 07:18:49.165053 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:49 crc kubenswrapper[4769]: I1006 07:18:49.165256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:49 crc kubenswrapper[4769]: E1006 07:18:49.165352 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:49 crc kubenswrapper[4769]: E1006 07:18:49.165498 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:49 crc kubenswrapper[4769]: E1006 07:18:49.165560 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:49 crc kubenswrapper[4769]: E1006 07:18:49.249443 4769 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 06 07:18:51 crc kubenswrapper[4769]: I1006 07:18:51.164966 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:51 crc kubenswrapper[4769]: I1006 07:18:51.165013 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:51 crc kubenswrapper[4769]: E1006 07:18:51.165124 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:51 crc kubenswrapper[4769]: I1006 07:18:51.164972 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:51 crc kubenswrapper[4769]: E1006 07:18:51.165284 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:51 crc kubenswrapper[4769]: I1006 07:18:51.165335 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:51 crc kubenswrapper[4769]: E1006 07:18:51.165400 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:51 crc kubenswrapper[4769]: E1006 07:18:51.165485 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:52 crc kubenswrapper[4769]: I1006 07:18:52.166221 4769 scope.go:117] "RemoveContainer" containerID="94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b" Oct 06 07:18:52 crc kubenswrapper[4769]: I1006 07:18:52.759781 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/1.log" Oct 06 07:18:52 crc kubenswrapper[4769]: I1006 07:18:52.760224 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerStarted","Data":"7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3"} Oct 06 07:18:53 crc kubenswrapper[4769]: I1006 07:18:53.165139 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:53 crc kubenswrapper[4769]: I1006 07:18:53.165201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:53 crc kubenswrapper[4769]: E1006 07:18:53.165292 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wxwxs" podUID="cbddd0e8-9d17-4278-acdc-e35d2d8d70f9" Oct 06 07:18:53 crc kubenswrapper[4769]: I1006 07:18:53.165157 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:53 crc kubenswrapper[4769]: E1006 07:18:53.165363 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 06 07:18:53 crc kubenswrapper[4769]: I1006 07:18:53.165222 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:53 crc kubenswrapper[4769]: E1006 07:18:53.165544 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 06 07:18:53 crc kubenswrapper[4769]: E1006 07:18:53.165724 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.165351 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.165401 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.165462 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.165553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168328 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168344 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168410 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168511 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168844 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 06 07:18:55 crc kubenswrapper[4769]: I1006 07:18:55.168892 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 06 07:19:03 crc kubenswrapper[4769]: I1006 07:19:03.730843 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.056348 4769 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.111638 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.112167 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.115125 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.115743 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.115947 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ftkq8"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.116805 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.117037 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.117252 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.117455 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.120738 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.121017 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.148770 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.149115 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.156251 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166081 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166354 4769 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166710 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166864 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166903 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.166889 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.167353 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.167837 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.169931 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.170291 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.170810 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.170864 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.183785 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.196280 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.196849 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199454 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617e0ca7-0321-4c2b-ae31-914f5088b976-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199756 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199807 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199882 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.197190 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199919 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvjvq\" (UniqueName: \"kubernetes.io/projected/617e0ca7-0321-4c2b-ae31-914f5088b976-kube-api-access-xvjvq\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.199973 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-auth-proxy-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.198236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200031 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-config\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200063 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e62be3-b28a-4ff2-846c-8434ade8d012-serving-cert\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.198294 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200092 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa049f4f-024f-4f36-a7e6-9583b1501649-machine-approver-tls\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200119 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzd9\" (UniqueName: \"kubernetes.io/projected/fa049f4f-024f-4f36-a7e6-9583b1501649-kube-api-access-ttzd9\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200212 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzsp\" (UniqueName: \"kubernetes.io/projected/73e62be3-b28a-4ff2-846c-8434ade8d012-kube-api-access-hmzsp\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200240 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617e0ca7-0321-4c2b-ae31-914f5088b976-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200282 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200307 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200340 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvldr\" (UniqueName: \"kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200369 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-657fz\" (UniqueName: \"kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200531 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.200567 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-service-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.201313 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.209741 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgcjc"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.210180 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.211123 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.211360 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.211519 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.215905 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.215943 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216069 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216467 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216521 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216642 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216790 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 06 
07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.216830 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217015 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217025 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217173 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217201 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217298 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.217376 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.218065 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.218569 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.219061 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.222923 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8f7sk"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.223575 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.224027 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.224270 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.242721 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.243111 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.243316 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.243523 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.243744 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.243946 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.244119 4769 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.244396 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.245521 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.245842 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.246580 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.247404 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.247872 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.249364 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.249665 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.249861 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.250023 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.256482 4769 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.257936 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.258100 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.258256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.258682 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.258930 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.260037 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.260204 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.260494 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.261070 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.261686 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.262172 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.263377 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.263889 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.269809 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s2fwl"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.270965 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.276658 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvgm4"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.277372 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.277798 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.278141 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.280083 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-px9vm"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.281066 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l5vs7"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.282624 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.281344 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.281782 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.283673 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.283793 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.283879 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.283952 4769 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.284021 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.285652 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.285819 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.287097 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.287207 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.289843 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.292208 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.292567 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.293151 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.294768 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.299409 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300316 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgcjc"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300408 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300768 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300847 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300906 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301477 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301698 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.300474 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301999 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301933 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301565 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.301607 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304035 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304797 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304802 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvldr\" (UniqueName: \"kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.304824 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.305182 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-657fz\" (UniqueName: \"kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.305363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306455 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306467 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306851 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-service-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617e0ca7-0321-4c2b-ae31-914f5088b976-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: 
I1006 07:19:04.306941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306968 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.306989 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307011 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvjvq\" (UniqueName: \"kubernetes.io/projected/617e0ca7-0321-4c2b-ae31-914f5088b976-kube-api-access-xvjvq\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307031 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-auth-proxy-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: 
\"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-config\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307117 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e62be3-b28a-4ff2-846c-8434ade8d012-serving-cert\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307137 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa049f4f-024f-4f36-a7e6-9583b1501649-machine-approver-tls\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307161 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzd9\" (UniqueName: 
\"kubernetes.io/projected/fa049f4f-024f-4f36-a7e6-9583b1501649-kube-api-access-ttzd9\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307222 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzsp\" (UniqueName: \"kubernetes.io/projected/73e62be3-b28a-4ff2-846c-8434ade8d012-kube-api-access-hmzsp\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307248 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617e0ca7-0321-4c2b-ae31-914f5088b976-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.307814 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617e0ca7-0321-4c2b-ae31-914f5088b976-config\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.308310 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-auth-proxy-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc 
kubenswrapper[4769]: I1006 07:19:04.308750 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-config\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.309209 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa049f4f-024f-4f36-a7e6-9583b1501649-config\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.310332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-service-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.310464 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.310389 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.310961 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Oct 06 
07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311197 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311282 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311405 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311563 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311646 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.311837 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.327493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.327675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.327833 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73e62be3-b28a-4ff2-846c-8434ade8d012-serving-cert\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.327896 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.328149 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.328254 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.330143 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.330627 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.332102 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73e62be3-b28a-4ff2-846c-8434ade8d012-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.332846 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.333285 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.333799 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.335275 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-xnztd"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.335885 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.336070 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.336584 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.338502 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.339302 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.340183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fa049f4f-024f-4f36-a7e6-9583b1501649-machine-approver-tls\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.341654 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.342119 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.350121 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.350562 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.352577 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617e0ca7-0321-4c2b-ae31-914f5088b976-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.388272 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.389094 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.389233 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.389310 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.389499 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.389626 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.390032 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.391056 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.391260 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k666r"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.391686 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.392434 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xnztd"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.391831 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.392834 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.392973 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.393559 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.393597 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rp2mj"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.394121 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.394149 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9jjc"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.394486 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.394809 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.395074 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.395604 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.395844 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.396026 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.396296 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.397153 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.397477 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ftkq8"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.398372 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.399036 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.399487 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.400115 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.400173 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.400832 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.402810 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.403546 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.404540 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.405032 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.406277 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.407504 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.407931 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408171 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-config\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408271 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-serving-cert\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408543 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-node-pullsecrets\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408849 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-config\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.408931 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76308f77-d787-4795-91f3-15c13227c3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.409099 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cstzc"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.414702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.414998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415078 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9428\" (UniqueName: \"kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415227 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pw4b\" (UniqueName: \"kubernetes.io/projected/8045bc4d-cc45-45a6-a2fe-85107461350c-kube-api-access-8pw4b\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415310 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-service-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415381 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415415 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cstzc"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415340 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416202 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416224 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8vq2f"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.415678 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-metrics-tls\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416516 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416614 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hzv\" (UniqueName: \"kubernetes.io/projected/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-kube-api-access-l7hzv\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416715 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-trusted-ca\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416811 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416931 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417023 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-client\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417101 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfl6b\" (UniqueName: \"kubernetes.io/projected/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-kube-api-access-xfl6b\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417175 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"]
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.416858 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8vq2f"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417364 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417480 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit-dir\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417673 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417757 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4fwm\" (UniqueName: \"kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417874 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-config\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-image-import-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417930 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417950 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.417989 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418007 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418025 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c267793-2072-487d-8d1b-2e962921ceee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418042 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418070 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqg9m\" (UniqueName: \"kubernetes.io/projected/70104bc8-4b31-4151-8aae-ceae236c2359-kube-api-access-lqg9m\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418085 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-client\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418100 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-encryption-config\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418142 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76308f77-d787-4795-91f3-15c13227c3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418160 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418178 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045bc4d-cc45-45a6-a2fe-85107461350c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418196 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76308f77-d787-4795-91f3-15c13227c3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418255 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418275 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-trusted-ca\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418293 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418332 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxlt\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-kube-api-access-xwxlt\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418354 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418391 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp47n\" (UniqueName: \"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-kube-api-access-jp47n\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418464 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-serving-cert\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm"
Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418502 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnr6t\" (UniqueName: \"kubernetes.io/projected/c319d2c0-09fc-41ee-bef3-376730d981c8-kube-api-access-qnr6t\") pod \"migrator-59844c95c7-cnwm9\" (UID: \"c319d2c0-09fc-41ee-bef3-376730d981c8\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418542 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045bc4d-cc45-45a6-a2fe-85107461350c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418559 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvsq\" (UniqueName: \"kubernetes.io/projected/6c267793-2072-487d-8d1b-2e962921ceee-kube-api-access-xmvsq\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418575 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-serving-cert\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418926 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.418606 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.420332 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvgm4"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.422759 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sdgpd"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429474 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s2fwl"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429507 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429518 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8f7sk"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429532 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429542 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.429626 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.430334 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.431826 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.434607 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.441025 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.443357 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.449382 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l5vs7"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.451216 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.452728 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.454091 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.455271 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.456505 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k666r"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.457565 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.458673 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-px9vm"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.459844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.460777 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pfz99"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.462198 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rp2mj"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.462306 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.462872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.464073 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9jjc"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.465090 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.466162 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sdgpd"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.467218 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.468231 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.469736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cstzc"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.470480 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.471508 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pfz99"] Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.473567 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.503975 4769 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520131 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c267793-2072-487d-8d1b-2e962921ceee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520199 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520246 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqg9m\" (UniqueName: 
\"kubernetes.io/projected/70104bc8-4b31-4151-8aae-ceae236c2359-kube-api-access-lqg9m\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76308f77-d787-4795-91f3-15c13227c3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520307 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-client\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520325 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-encryption-config\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520412 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520483 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045bc4d-cc45-45a6-a2fe-85107461350c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520531 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76308f77-d787-4795-91f3-15c13227c3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520580 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-trusted-ca\") pod \"console-operator-58897d9998-s2fwl\" (UID: 
\"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520614 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520637 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxlt\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-kube-api-access-xwxlt\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520679 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520695 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") 
" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520741 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520779 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520799 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520843 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp47n\" (UniqueName: 
\"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-kube-api-access-jp47n\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnr6t\" (UniqueName: \"kubernetes.io/projected/c319d2c0-09fc-41ee-bef3-376730d981c8-kube-api-access-qnr6t\") pod \"migrator-59844c95c7-cnwm9\" (UID: \"c319d2c0-09fc-41ee-bef3-376730d981c8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520894 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045bc4d-cc45-45a6-a2fe-85107461350c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvsq\" (UniqueName: \"kubernetes.io/projected/6c267793-2072-487d-8d1b-2e962921ceee-kube-api-access-xmvsq\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520955 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " 
pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.520973 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-serving-cert\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521014 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-serving-cert\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521036 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521121 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-config\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521171 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: 
\"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-serving-cert\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521213 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76308f77-d787-4795-91f3-15c13227c3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521247 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-node-pullsecrets\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521265 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-config\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521283 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521320 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521337 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9428\" (UniqueName: \"kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521364 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pw4b\" (UniqueName: \"kubernetes.io/projected/8045bc4d-cc45-45a6-a2fe-85107461350c-kube-api-access-8pw4b\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521402 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-service-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc 
kubenswrapper[4769]: I1006 07:19:04.521441 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521460 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-metrics-tls\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7hzv\" (UniqueName: \"kubernetes.io/projected/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-kube-api-access-l7hzv\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-trusted-ca\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521545 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521594 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-client\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521616 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfl6b\" (UniqueName: \"kubernetes.io/projected/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-kube-api-access-xfl6b\") pod 
\"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521693 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit-dir\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521731 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4fwm\" (UniqueName: \"kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-config\") 
pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-image-import-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.521845 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.522191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-serving-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.522884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.522977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.523540 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-trusted-ca\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.523793 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-service-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.523910 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.525037 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-config\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.525297 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-image-import-ca\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.525803 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526228 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526714 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526892 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526917 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-audit-dir\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.526994 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-config\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.554983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.555217 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-encryption-config\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.555757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556172 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556216 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-client\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556381 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556446 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.556709 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70104bc8-4b31-4151-8aae-ceae236c2359-serving-cert\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557084 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " 
pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557254 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-node-pullsecrets\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557737 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-kn4s5\" 
(UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557798 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c267793-2072-487d-8d1b-2e962921ceee-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.557896 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-etcd-client\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.558234 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.558710 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.559176 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-config\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.559327 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/70104bc8-4b31-4151-8aae-ceae236c2359-etcd-ca\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.559658 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.560091 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.561096 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8045bc4d-cc45-45a6-a2fe-85107461350c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.568116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8045bc4d-cc45-45a6-a2fe-85107461350c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.568893 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-serving-cert\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.569310 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-serving-cert\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.574355 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.583152 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.594251 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.614536 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.633865 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.638675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-metrics-tls\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.661354 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.667925 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-trusted-ca\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.674689 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.694490 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.715485 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.734796 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.757784 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.774708 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 
07:19:04.794217 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.815140 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.833954 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.855414 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.874383 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.877563 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76308f77-d787-4795-91f3-15c13227c3e7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.894349 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.915560 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.918162 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/76308f77-d787-4795-91f3-15c13227c3e7-config\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.935810 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.955012 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.974452 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 06 07:19:04 crc kubenswrapper[4769]: I1006 07:19:04.995647 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.036214 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-657fz\" (UniqueName: \"kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz\") pod \"controller-manager-879f6c89f-bc478\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.046951 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvldr\" (UniqueName: \"kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr\") pod \"route-controller-manager-6576b87f9c-rsc4m\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 
07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.089053 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvjvq\" (UniqueName: \"kubernetes.io/projected/617e0ca7-0321-4c2b-ae31-914f5088b976-kube-api-access-xvjvq\") pod \"openshift-apiserver-operator-796bbdcf4f-pkqrr\" (UID: \"617e0ca7-0321-4c2b-ae31-914f5088b976\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.090364 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.109895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzd9\" (UniqueName: \"kubernetes.io/projected/fa049f4f-024f-4f36-a7e6-9583b1501649-kube-api-access-ttzd9\") pod \"machine-approver-56656f9798-2fc25\" (UID: \"fa049f4f-024f-4f36-a7e6-9583b1501649\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.133819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzsp\" (UniqueName: \"kubernetes.io/projected/73e62be3-b28a-4ff2-846c-8434ade8d012-kube-api-access-hmzsp\") pod \"authentication-operator-69f744f599-ftkq8\" (UID: \"73e62be3-b28a-4ff2-846c-8434ade8d012\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.135826 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.162042 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.175825 4769 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.214907 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.236960 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.256079 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.276361 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.295045 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.316933 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.335181 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.335437 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.346116 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"] Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.348113 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.354774 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.358470 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" Oct 06 07:19:05 crc kubenswrapper[4769]: W1006 07:19:05.369201 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa049f4f_024f_4f36_a7e6_9583b1501649.slice/crio-c0db6a0b8b334c4d3f40b163170be518a9f6713cf26f87a7f7a1f58c3cbfb420 WatchSource:0}: Error finding container c0db6a0b8b334c4d3f40b163170be518a9f6713cf26f87a7f7a1f58c3cbfb420: Status 404 returned error can't find the container with id c0db6a0b8b334c4d3f40b163170be518a9f6713cf26f87a7f7a1f58c3cbfb420 Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.374459 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.388223 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.396286 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.413663 4769 request.go:700] Waited for 1.019667452s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.416178 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.435410 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.464872 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.478827 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.497897 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.517364 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.535077 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 06 07:19:05 crc kubenswrapper[4769]: 
I1006 07:19:05.554343 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.574566 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.584816 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"] Oct 06 07:19:05 crc kubenswrapper[4769]: W1006 07:19:05.592256 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aaffef3_8bf8_4efc_8800_2db670031b2e.slice/crio-82b1938541d36057784bcc180875d46f2b9ae2675cbf76090626c5c27f894014 WatchSource:0}: Error finding container 82b1938541d36057784bcc180875d46f2b9ae2675cbf76090626c5c27f894014: Status 404 returned error can't find the container with id 82b1938541d36057784bcc180875d46f2b9ae2675cbf76090626c5c27f894014 Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.594745 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.617208 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.624673 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr"] Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.635559 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: W1006 07:19:05.639655 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod617e0ca7_0321_4c2b_ae31_914f5088b976.slice/crio-80797a3bf5c79454073432208404258c889011688baa3203f7a81c642c513399 WatchSource:0}: Error finding container 80797a3bf5c79454073432208404258c889011688baa3203f7a81c642c513399: Status 404 returned error can't find the container with id 80797a3bf5c79454073432208404258c889011688baa3203f7a81c642c513399 Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.654334 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.674714 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.696862 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.715109 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.734767 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.753859 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.775207 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.788339 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-ftkq8"] Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.794636 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: W1006 07:19:05.801962 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e62be3_b28a_4ff2_846c_8434ade8d012.slice/crio-69bfd0df78aced3a23be7d3004ee4e2a28ece513fea625ffdec1e93753719aff WatchSource:0}: Error finding container 69bfd0df78aced3a23be7d3004ee4e2a28ece513fea625ffdec1e93753719aff: Status 404 returned error can't find the container with id 69bfd0df78aced3a23be7d3004ee4e2a28ece513fea625ffdec1e93753719aff Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.812835 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" event={"ID":"73e62be3-b28a-4ff2-846c-8434ade8d012","Type":"ContainerStarted","Data":"69bfd0df78aced3a23be7d3004ee4e2a28ece513fea625ffdec1e93753719aff"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.813936 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.814950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" event={"ID":"0aaffef3-8bf8-4efc-8800-2db670031b2e","Type":"ContainerStarted","Data":"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.815005 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" 
event={"ID":"0aaffef3-8bf8-4efc-8800-2db670031b2e","Type":"ContainerStarted","Data":"82b1938541d36057784bcc180875d46f2b9ae2675cbf76090626c5c27f894014"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.815151 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.816773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" event={"ID":"fa049f4f-024f-4f36-a7e6-9583b1501649","Type":"ContainerStarted","Data":"739954e5cd67130e8c007874b40d5aeb7d7151c6d080388a52e00f0d6ebf388d"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.816784 4769 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-rsc4m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.816801 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" event={"ID":"fa049f4f-024f-4f36-a7e6-9583b1501649","Type":"ContainerStarted","Data":"c0db6a0b8b334c4d3f40b163170be518a9f6713cf26f87a7f7a1f58c3cbfb420"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.816859 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.818121 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" event={"ID":"a77a2ffb-9393-4cd9-9162-50678c269f57","Type":"ContainerStarted","Data":"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.818178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" event={"ID":"a77a2ffb-9393-4cd9-9162-50678c269f57","Type":"ContainerStarted","Data":"1aee6f10112c1646c29ee6e72fd2b910ea3fd5efa487b15e21ae1eb4cbf94ecd"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.820463 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" event={"ID":"617e0ca7-0321-4c2b-ae31-914f5088b976","Type":"ContainerStarted","Data":"59ab4476de61f3a7df8bc1577bb201fefd3316c216f6deeb6d5161232b56b1a5"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.820515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" event={"ID":"617e0ca7-0321-4c2b-ae31-914f5088b976","Type":"ContainerStarted","Data":"80797a3bf5c79454073432208404258c889011688baa3203f7a81c642c513399"} Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.834249 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.854702 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.878910 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.894897 4769 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.915202 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.934060 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.955237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.975175 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 06 07:19:05 crc kubenswrapper[4769]: I1006 07:19:05.994381 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.025189 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.036214 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.056277 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.074626 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.095101 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.116166 
4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.134950 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.154007 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.174358 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.196376 4769 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.217006 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.235236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.254463 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.275454 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.328612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.352949 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxlt\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-kube-api-access-xwxlt\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.368332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9428\" (UniqueName: \"kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428\") pod \"oauth-openshift-558db77b4-kn4s5\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.373274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-n8nmg\" (UID: \"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.397371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4fwm\" (UniqueName: \"kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm\") pod \"console-f9d7485db-lsg5p\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.418085 4769 request.go:700] Waited for 1.892117671s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.421770 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqg9m\" (UniqueName: \"kubernetes.io/projected/70104bc8-4b31-4151-8aae-ceae236c2359-kube-api-access-lqg9m\") pod \"etcd-operator-b45778765-px9vm\" (UID: \"70104bc8-4b31-4151-8aae-ceae236c2359\") " pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.429717 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.439679 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7hzv\" (UniqueName: \"kubernetes.io/projected/8f45d32a-bf9f-4fb1-8999-ea280b0518a9-kube-api-access-l7hzv\") pod \"apiserver-76f77b778f-8f7sk\" (UID: \"8f45d32a-bf9f-4fb1-8999-ea280b0518a9\") " pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.453708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfl6b\" (UniqueName: \"kubernetes.io/projected/a6ae74f0-a980-4a1e-9d57-8c2a879f53ef-kube-api-access-xfl6b\") pod \"console-operator-58897d9998-s2fwl\" (UID: \"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef\") " pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.474987 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76308f77-d787-4795-91f3-15c13227c3e7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jp7kb\" (UID: \"76308f77-d787-4795-91f3-15c13227c3e7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" 
Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.489045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pw4b\" (UniqueName: \"kubernetes.io/projected/8045bc4d-cc45-45a6-a2fe-85107461350c-kube-api-access-8pw4b\") pod \"openshift-controller-manager-operator-756b6f6bc6-tjdr6\" (UID: \"8045bc4d-cc45-45a6-a2fe-85107461350c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.504696 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.509870 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnr6t\" (UniqueName: \"kubernetes.io/projected/c319d2c0-09fc-41ee-bef3-376730d981c8-kube-api-access-qnr6t\") pod \"migrator-59844c95c7-cnwm9\" (UID: \"c319d2c0-09fc-41ee-bef3-376730d981c8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.529948 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.544931 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp47n\" (UniqueName: \"kubernetes.io/projected/6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe-kube-api-access-jp47n\") pod \"cluster-image-registry-operator-dc59b4c8b-457g7\" (UID: \"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.546882 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.551580 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.557747 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmvsq\" (UniqueName: \"kubernetes.io/projected/6c267793-2072-487d-8d1b-2e962921ceee-kube-api-access-xmvsq\") pod \"cluster-samples-operator-665b6dd947-c5rcq\" (UID: \"6c267793-2072-487d-8d1b-2e962921ceee\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.573311 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.582701 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.633350 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"] Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650239 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhjx\" (UniqueName: \"kubernetes.io/projected/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-kube-api-access-wmhjx\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650312 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650382 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrkz\" (UniqueName: \"kubernetes.io/projected/dfe17f06-5ce4-492b-b856-5b3b63c1b214-kube-api-access-ghrkz\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650413 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650773 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-images\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650820 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97g8\" (UniqueName: \"kubernetes.io/projected/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-kube-api-access-b97g8\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650856 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-serving-cert\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650915 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/178b093c-7638-4f43-99d9-04a4e4573dd1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.650941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe17f06-5ce4-492b-b856-5b3b63c1b214-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652245 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9l6w\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652334 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/856dea71-a479-4575-86ce-6021c6e70e6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: 
\"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652386 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-dir\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652584 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b1a836-7d3e-4e40-8147-b59b826c3885-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: W1006 07:19:06.652658 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e16e210_5266_45ae_9f3d_c214c5c173a4.slice/crio-7a78fc41b73a3e9d8ee2cd6e75203f8ab7a482dc9951c1182e37e9fee757f831 WatchSource:0}: Error finding container 7a78fc41b73a3e9d8ee2cd6e75203f8ab7a482dc9951c1182e37e9fee757f831: Status 404 returned error can't find the container with id 7a78fc41b73a3e9d8ee2cd6e75203f8ab7a482dc9951c1182e37e9fee757f831 Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652702 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 
07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.652727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-client\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653363 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab415f9f-c934-4c71-9702-57efb1e510ff-metrics-tls\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: E1006 07:19:06.653411 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.153387232 +0000 UTC m=+143.677668379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653492 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653528 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653545 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5b1a836-7d3e-4e40-8147-b59b826c3885-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn65c\" (UniqueName: 
\"kubernetes.io/projected/b3761f9e-65a2-4d35-8028-20f7492cea9a-kube-api-access-vn65c\") pod \"downloads-7954f5f757-l5vs7\" (UID: \"b3761f9e-65a2-4d35-8028-20f7492cea9a\") " pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp48d\" (UniqueName: \"kubernetes.io/projected/ab415f9f-c934-4c71-9702-57efb1e510ff-kube-api-access-dp48d\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653658 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe17f06-5ce4-492b-b856-5b3b63c1b214-proxy-tls\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-config\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653802 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b1a836-7d3e-4e40-8147-b59b826c3885-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 
07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653835 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653852 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653879 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-encryption-config\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653921 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnct\" (UniqueName: 
\"kubernetes.io/projected/178b093c-7638-4f43-99d9-04a4e4573dd1-kube-api-access-jsnct\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653967 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/856dea71-a479-4575-86ce-6021c6e70e6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.653989 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-policies\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.654014 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/856dea71-a479-4575-86ce-6021c6e70e6d-kube-api-access-vh2m7\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.670317 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.679341 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.690733 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756846 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756869 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756895 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: \"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 
06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756953 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.756970 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757006 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsnct\" (UniqueName: \"kubernetes.io/projected/178b093c-7638-4f43-99d9-04a4e4573dd1-kube-api-access-jsnct\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757024 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-policies\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757048 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/856dea71-a479-4575-86ce-6021c6e70e6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757071 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f9172c-31c8-4e15-a42b-64cdba575bbd-config-volume\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757088 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-stats-auth\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757173 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-mountpoint-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757197 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhjx\" (UniqueName: \"kubernetes.io/projected/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-kube-api-access-wmhjx\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757216 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757233 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757249 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jj6q\" (UniqueName: \"kubernetes.io/projected/0d67c599-54fe-4062-b4d4-b5319c3902be-kube-api-access-7jj6q\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-csi-data-dir\") pod 
\"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757285 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-images\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrkz\" (UniqueName: \"kubernetes.io/projected/dfe17f06-5ce4-492b-b856-5b3b63c1b214-kube-api-access-ghrkz\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757319 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgrj8\" (UniqueName: \"kubernetes.io/projected/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-kube-api-access-zgrj8\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757335 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzb9q\" (UniqueName: \"kubernetes.io/projected/582d0a7e-6f76-4662-b311-658438fa3dc1-kube-api-access-bzb9q\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-serving-cert\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757368 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-registration-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757411 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/178b093c-7638-4f43-99d9-04a4e4573dd1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757450 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-plugins-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 
07:19:06.757489 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86m2\" (UniqueName: \"kubernetes.io/projected/18c78bde-92f9-4e3a-998e-659f978d26a6-kube-api-access-d86m2\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757507 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/582d0a7e-6f76-4662-b311-658438fa3dc1-cert\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-config\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757561 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-cabundle\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: 
\"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b1a836-7d3e-4e40-8147-b59b826c3885-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757667 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757684 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-key\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-client\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757745 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ab415f9f-c934-4c71-9702-57efb1e510ff-metrics-tls\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757791 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757808 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn65c\" (UniqueName: \"kubernetes.io/projected/b3761f9e-65a2-4d35-8028-20f7492cea9a-kube-api-access-vn65c\") pod \"downloads-7954f5f757-l5vs7\" (UID: \"b3761f9e-65a2-4d35-8028-20f7492cea9a\") " pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-config\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757856 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe17f06-5ce4-492b-b856-5b3b63c1b214-proxy-tls\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757893 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9skc\" (UniqueName: \"kubernetes.io/projected/dc476882-04fd-4566-acb1-ed94a705c94f-kube-api-access-d9skc\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757909 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qsjc\" (UniqueName: \"kubernetes.io/projected/baa140a6-2fa8-4527-80c5-3362db13fb76-kube-api-access-7qsjc\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757932 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-node-bootstrap-token\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757956 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kslg2\" (UniqueName: \"kubernetes.io/projected/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-kube-api-access-kslg2\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-config\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.757998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-certs\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758017 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-encryption-config\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/dc476882-04fd-4566-acb1-ed94a705c94f-tmpfs\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758152 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758180 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758196 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjkw\" (UniqueName: \"kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758213 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5xd4\" (UniqueName: \"kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758273 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-default-certificate\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758294 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmsx\" (UniqueName: \"kubernetes.io/projected/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-kube-api-access-hnmsx\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: \"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758371 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2m7\" (UniqueName: \"kubernetes.io/projected/856dea71-a479-4575-86ce-6021c6e70e6d-kube-api-access-vh2m7\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758412 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-zkkd4\" (UniqueName: \"kubernetes.io/projected/500255b4-789b-43dd-ba43-067682532ae9-kube-api-access-zkkd4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: \"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758446 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czk4\" (UniqueName: \"kubernetes.io/projected/9bab245c-413d-4d13-949e-02e428400df4-kube-api-access-4czk4\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bab245c-413d-4d13-949e-02e428400df4-service-ca-bundle\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758573 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg876\" (UniqueName: \"kubernetes.io/projected/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-kube-api-access-xg876\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758589 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-serving-cert\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758617 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758633 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97g8\" (UniqueName: \"kubernetes.io/projected/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-kube-api-access-b97g8\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 
07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758709 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe17f06-5ce4-492b-b856-5b3b63c1b214-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758732 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7f9172c-31c8-4e15-a42b-64cdba575bbd-metrics-tls\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-srv-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758787 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9l6w\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758803 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/856dea71-a479-4575-86ce-6021c6e70e6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: 
\"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758821 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-dir\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758855 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/500255b4-789b-43dd-ba43-067682532ae9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: \"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758879 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlpn\" (UniqueName: \"kubernetes.io/projected/30232ce1-b859-4773-b763-648fb3399f4d-kube-api-access-qtlpn\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zp2\" (UniqueName: \"kubernetes.io/projected/d7f9172c-31c8-4e15-a42b-64cdba575bbd-kube-api-access-k4zp2\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758921 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-apiservice-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758947 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-srv-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758964 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d67c599-54fe-4062-b4d4-b5319c3902be-proxy-tls\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.758980 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759008 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5b1a836-7d3e-4e40-8147-b59b826c3885-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: 
\"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759027 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp48d\" (UniqueName: \"kubernetes.io/projected/ab415f9f-c934-4c71-9702-57efb1e510ff-kube-api-access-dp48d\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759047 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-metrics-certs\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759112 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b1a836-7d3e-4e40-8147-b59b826c3885-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759136 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8599\" (UniqueName: \"kubernetes.io/projected/f81eb0e9-5f14-40e6-a457-af787a30fca4-kube-api-access-v8599\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-socket-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-images\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.759220 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-webhook-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: E1006 07:19:06.760076 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.260040807 +0000 UTC m=+143.784322104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.761581 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.761849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.763979 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.764235 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-policies\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: 
\"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.764727 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/856dea71-a479-4575-86ce-6021c6e70e6d-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.765571 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.768986 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.770801 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-images\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.771232 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-config\") pod 
\"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.777892 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-audit-dir\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.778076 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.779614 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dfe17f06-5ce4-492b-b856-5b3b63c1b214-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.780272 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5b1a836-7d3e-4e40-8147-b59b826c3885-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.781628 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.782536 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dfe17f06-5ce4-492b-b856-5b3b63c1b214-proxy-tls\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.783201 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/178b093c-7638-4f43-99d9-04a4e4573dd1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.784728 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.787359 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5b1a836-7d3e-4e40-8147-b59b826c3885-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 
crc kubenswrapper[4769]: I1006 07:19:06.788768 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-etcd-client\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.788790 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-encryption-config\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.789744 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/856dea71-a479-4575-86ce-6021c6e70e6d-serving-cert\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.790330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab415f9f-c934-4c71-9702-57efb1e510ff-metrics-tls\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.791857 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-serving-cert\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: 
I1006 07:19:06.793390 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.795032 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-s2fwl"] Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.796681 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhjx\" (UniqueName: \"kubernetes.io/projected/c0366b1e-5791-4b3c-955d-4adbeb2f9ccc-kube-api-access-wmhjx\") pod \"apiserver-7bbb656c7d-zhqnp\" (UID: \"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.814041 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnct\" (UniqueName: \"kubernetes.io/projected/178b093c-7638-4f43-99d9-04a4e4573dd1-kube-api-access-jsnct\") pod \"package-server-manager-789f6589d5-cvx4t\" (UID: \"178b093c-7638-4f43-99d9-04a4e4573dd1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.831493 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5b1a836-7d3e-4e40-8147-b59b826c3885-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-w2sjg\" (UID: \"b5b1a836-7d3e-4e40-8147-b59b826c3885\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.834459 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" 
event={"ID":"73e62be3-b28a-4ff2-846c-8434ade8d012","Type":"ContainerStarted","Data":"9844014bd256feb53907d6838a588e877aaf3b8a5f582278ba0bfa6f94c49dd4"} Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.838002 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lsg5p" event={"ID":"7e16e210-5266-45ae-9f3d-c214c5c173a4","Type":"ContainerStarted","Data":"7a78fc41b73a3e9d8ee2cd6e75203f8ab7a482dc9951c1182e37e9fee757f831"} Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.840095 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" event={"ID":"fa049f4f-024f-4f36-a7e6-9583b1501649","Type":"ContainerStarted","Data":"79fb665de7cad380b73e70e5ea74c54f0a3b27972fa895f2e3731527441c0a9b"} Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.842984 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.847742 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg"] Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.847937 4769 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bc478 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.847983 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 06 07:19:06 crc kubenswrapper[4769]: 
I1006 07:19:06.859201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860292 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/500255b4-789b-43dd-ba43-067682532ae9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: \"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlpn\" (UniqueName: \"kubernetes.io/projected/30232ce1-b859-4773-b763-648fb3399f4d-kube-api-access-qtlpn\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4zp2\" (UniqueName: \"kubernetes.io/projected/d7f9172c-31c8-4e15-a42b-64cdba575bbd-kube-api-access-k4zp2\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 
07:19:06.860380 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-apiservice-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860395 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-srv-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860450 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d67c599-54fe-4062-b4d4-b5319c3902be-proxy-tls\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860468 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860490 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-metrics-certs\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " 
pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860506 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8599\" (UniqueName: \"kubernetes.io/projected/f81eb0e9-5f14-40e6-a457-af787a30fca4-kube-api-access-v8599\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860537 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-socket-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-images\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860578 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-webhook-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860597 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: 
\"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860622 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860658 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f9172c-31c8-4e15-a42b-64cdba575bbd-config-volume\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-stats-auth\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860701 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume\") pod 
\"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860717 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-mountpoint-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860755 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jj6q\" (UniqueName: \"kubernetes.io/projected/0d67c599-54fe-4062-b4d4-b5319c3902be-kube-api-access-7jj6q\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860773 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-csi-data-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860796 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zgrj8\" (UniqueName: \"kubernetes.io/projected/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-kube-api-access-zgrj8\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860815 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzb9q\" (UniqueName: \"kubernetes.io/projected/582d0a7e-6f76-4662-b311-658438fa3dc1-kube-api-access-bzb9q\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860837 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-registration-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860852 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-plugins-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860870 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86m2\" (UniqueName: \"kubernetes.io/projected/18c78bde-92f9-4e3a-998e-659f978d26a6-kube-api-access-d86m2\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860888 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/582d0a7e-6f76-4662-b311-658438fa3dc1-cert\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-config\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860923 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-cabundle\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860939 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-key\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.860957 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861000 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9skc\" (UniqueName: \"kubernetes.io/projected/dc476882-04fd-4566-acb1-ed94a705c94f-kube-api-access-d9skc\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861017 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qsjc\" (UniqueName: \"kubernetes.io/projected/baa140a6-2fa8-4527-80c5-3362db13fb76-kube-api-access-7qsjc\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861034 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-node-bootstrap-token\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861064 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kslg2\" (UniqueName: \"kubernetes.io/projected/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-kube-api-access-kslg2\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861081 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-config\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861099 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-certs\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861123 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc476882-04fd-4566-acb1-ed94a705c94f-tmpfs\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjkw\" (UniqueName: 
\"kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5xd4\" (UniqueName: \"kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861212 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-default-certificate\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861232 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmsx\" (UniqueName: \"kubernetes.io/projected/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-kube-api-access-hnmsx\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: \"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkkd4\" (UniqueName: \"kubernetes.io/projected/500255b4-789b-43dd-ba43-067682532ae9-kube-api-access-zkkd4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: \"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" 
Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861271 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861288 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861321 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-registration-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861334 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czk4\" (UniqueName: \"kubernetes.io/projected/9bab245c-413d-4d13-949e-02e428400df4-kube-api-access-4czk4\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861355 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bab245c-413d-4d13-949e-02e428400df4-service-ca-bundle\") pod \"router-default-5444994796-xnztd\" (UID: 
\"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg876\" (UniqueName: \"kubernetes.io/projected/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-kube-api-access-xg876\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861388 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-serving-cert\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861428 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7f9172c-31c8-4e15-a42b-64cdba575bbd-metrics-tls\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861464 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-srv-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.861476 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f9172c-31c8-4e15-a42b-64cdba575bbd-config-volume\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.862085 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-socket-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.865742 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc476882-04fd-4566-acb1-ed94a705c94f-tmpfs\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.865815 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-plugins-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.866691 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2m7\" (UniqueName: 
\"kubernetes.io/projected/856dea71-a479-4575-86ce-6021c6e70e6d-kube-api-access-vh2m7\") pod \"openshift-config-operator-7777fb866f-6bj7f\" (UID: \"856dea71-a479-4575-86ce-6021c6e70e6d\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.867451 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.868500 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.869202 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-config\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.870591 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-cabundle\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.871536 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-csi-data-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.871728 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-srv-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.871863 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f81eb0e9-5f14-40e6-a457-af787a30fca4-mountpoint-dir\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.871958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-config\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.872250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:06 crc kubenswrapper[4769]: E1006 07:19:06.873034 4769 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.373015668 +0000 UTC m=+143.897296815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.873910 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bab245c-413d-4d13-949e-02e428400df4-service-ca-bundle\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.874147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.874592 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/500255b4-789b-43dd-ba43-067682532ae9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: 
\"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.879087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-auth-proxy-config\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.879086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0d67c599-54fe-4062-b4d4-b5319c3902be-images\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.882926 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-apiservice-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.886304 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: \"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.886850 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-node-bootstrap-token\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.887302 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.887762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-srv-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.888082 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.888571 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 
07:19:06.892582 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/18c78bde-92f9-4e3a-998e-659f978d26a6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.898943 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-serving-cert\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.899022 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-stats-auth\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.899361 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.899961 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/582d0a7e-6f76-4662-b311-658438fa3dc1-cert\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 
07:19:06.900501 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/30232ce1-b859-4773-b763-648fb3399f4d-profile-collector-cert\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.903719 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-metrics-certs\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.903930 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/baa140a6-2fa8-4527-80c5-3362db13fb76-signing-key\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.904324 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc476882-04fd-4566-acb1-ed94a705c94f-webhook-cert\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.905260 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d7f9172c-31c8-4e15-a42b-64cdba575bbd-metrics-tls\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.906130 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d67c599-54fe-4062-b4d4-b5319c3902be-proxy-tls\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.907456 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-certs\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.910365 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn65c\" (UniqueName: \"kubernetes.io/projected/b3761f9e-65a2-4d35-8028-20f7492cea9a-kube-api-access-vn65c\") pod \"downloads-7954f5f757-l5vs7\" (UID: \"b3761f9e-65a2-4d35-8028-20f7492cea9a\") " pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.912952 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/9bab245c-413d-4d13-949e-02e428400df4-default-certificate\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.933355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97g8\" (UniqueName: \"kubernetes.io/projected/600730b8-8dad-4391-bcf2-9fe5a4ca21b8-kube-api-access-b97g8\") pod \"machine-api-operator-5694c8668f-fgcjc\" (UID: \"600730b8-8dad-4391-bcf2-9fe5a4ca21b8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: 
I1006 07:19:06.937227 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.945687 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.953325 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.954902 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp48d\" (UniqueName: \"kubernetes.io/projected/ab415f9f-c934-4c71-9702-57efb1e510ff-kube-api-access-dp48d\") pod \"dns-operator-744455d44c-jvgm4\" (UID: \"ab415f9f-c934-4c71-9702-57efb1e510ff\") " pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.970166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:06 crc kubenswrapper[4769]: E1006 07:19:06.971097 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.471074604 +0000 UTC m=+143.995355751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:06 crc kubenswrapper[4769]: I1006 07:19:06.984794 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrkz\" (UniqueName: \"kubernetes.io/projected/dfe17f06-5ce4-492b-b856-5b3b63c1b214-kube-api-access-ghrkz\") pod \"machine-config-controller-84d6567774-gn24g\" (UID: \"dfe17f06-5ce4-492b-b856-5b3b63c1b214\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.011246 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.012546 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9l6w\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.032310 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4zp2\" (UniqueName: \"kubernetes.io/projected/d7f9172c-31c8-4e15-a42b-64cdba575bbd-kube-api-access-k4zp2\") pod \"dns-default-pfz99\" (UID: \"d7f9172c-31c8-4e15-a42b-64cdba575bbd\") " pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.033355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jj6q\" (UniqueName: \"kubernetes.io/projected/0d67c599-54fe-4062-b4d4-b5319c3902be-kube-api-access-7jj6q\") pod \"machine-config-operator-74547568cd-k666r\" (UID: \"0d67c599-54fe-4062-b4d4-b5319c3902be\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.068478 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.074185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.074682 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.574670345 +0000 UTC m=+144.098951492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.074729 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86m2\" (UniqueName: \"kubernetes.io/projected/18c78bde-92f9-4e3a-998e-659f978d26a6-kube-api-access-d86m2\") pod \"olm-operator-6b444d44fb-bv9ks\" (UID: \"18c78bde-92f9-4e3a-998e-659f978d26a6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.084069 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 
07:19:07.084405 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.088627 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-px9vm"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.100174 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlpn\" (UniqueName: \"kubernetes.io/projected/30232ce1-b859-4773-b763-648fb3399f4d-kube-api-access-qtlpn\") pod \"catalog-operator-68c6474976-x4wxj\" (UID: \"30232ce1-b859-4773-b763-648fb3399f4d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.105700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8599\" (UniqueName: \"kubernetes.io/projected/f81eb0e9-5f14-40e6-a457-af787a30fca4-kube-api-access-v8599\") pod \"csi-hostpathplugin-sdgpd\" (UID: \"f81eb0e9-5f14-40e6-a457-af787a30fca4\") " pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.117679 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.119157 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qsjc\" (UniqueName: \"kubernetes.io/projected/baa140a6-2fa8-4527-80c5-3362db13fb76-kube-api-access-7qsjc\") pod \"service-ca-9c57cc56f-f9jjc\" (UID: \"baa140a6-2fa8-4527-80c5-3362db13fb76\") " pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.134246 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.139646 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9skc\" (UniqueName: \"kubernetes.io/projected/dc476882-04fd-4566-acb1-ed94a705c94f-kube-api-access-d9skc\") pod \"packageserver-d55dfcdfc-grgwk\" (UID: \"dc476882-04fd-4566-acb1-ed94a705c94f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.145485 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.159298 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg876\" (UniqueName: \"kubernetes.io/projected/9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784-kube-api-access-xg876\") pod \"kube-storage-version-migrator-operator-b67b599dd-jw7gb\" (UID: \"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.175169 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.175590 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.675575101 +0000 UTC m=+144.199856248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.176472 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kslg2\" (UniqueName: \"kubernetes.io/projected/340a9fe0-d7c5-4b36-868d-f70cef5dd2b2-kube-api-access-kslg2\") pod \"service-ca-operator-777779d784-lmbcj\" (UID: \"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.190524 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.217794 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.218332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzb9q\" (UniqueName: \"kubernetes.io/projected/582d0a7e-6f76-4662-b311-658438fa3dc1-kube-api-access-bzb9q\") pod \"ingress-canary-cstzc\" (UID: \"582d0a7e-6f76-4662-b311-658438fa3dc1\") " pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.236769 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.237622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgrj8\" (UniqueName: \"kubernetes.io/projected/687216bf-bfb2-4eed-b2ff-8b1bc02d3516-kube-api-access-zgrj8\") pod \"machine-config-server-8vq2f\" (UID: \"687216bf-bfb2-4eed-b2ff-8b1bc02d3516\") " pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.242240 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ca5afb6-c0da-4409-a7c1-2209e8b279fa-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-m544s\" (UID: \"1ca5afb6-c0da-4409-a7c1-2209e8b279fa\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:07 crc kubenswrapper[4769]: W1006 07:19:07.255346 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc319d2c0_09fc_41ee_bef3_376730d981c8.slice/crio-5561f6e1c4e67764dbfb100602ace487e233cc663d12628f9157fdcc037ce456 WatchSource:0}: Error finding container 5561f6e1c4e67764dbfb100602ace487e233cc663d12628f9157fdcc037ce456: Status 404 returned error can't find the container with id 5561f6e1c4e67764dbfb100602ace487e233cc663d12628f9157fdcc037ce456 Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.272920 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.277408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.277751 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.777739172 +0000 UTC m=+144.302020319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.280759 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.287736 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.297013 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5xd4\" (UniqueName: \"kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4\") pod \"marketplace-operator-79b997595-jm2xh\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.301370 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmsx\" (UniqueName: \"kubernetes.io/projected/5a1e66fd-7451-4984-a953-2bb28b2f4cf0-kube-api-access-hnmsx\") pod \"multus-admission-controller-857f4d67dd-rp2mj\" (UID: \"5a1e66fd-7451-4984-a953-2bb28b2f4cf0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.303494 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.305497 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjkw\" (UniqueName: \"kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw\") pod \"collect-profiles-29328915-spmcr\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.326930 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.330265 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.331306 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkkd4\" (UniqueName: \"kubernetes.io/projected/500255b4-789b-43dd-ba43-067682532ae9-kube-api-access-zkkd4\") pod \"control-plane-machine-set-operator-78cbb6b69f-bstmm\" (UID: \"500255b4-789b-43dd-ba43-067682532ae9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.336916 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cstzc" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.345862 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8vq2f" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.367186 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.375173 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czk4\" (UniqueName: \"kubernetes.io/projected/9bab245c-413d-4d13-949e-02e428400df4-kube-api-access-4czk4\") pod \"router-default-5444994796-xnztd\" (UID: \"9bab245c-413d-4d13-949e-02e428400df4\") " pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.378867 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.379189 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.879172512 +0000 UTC m=+144.403453659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.417545 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.421013 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.430224 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8f7sk"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.483274 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.483778 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:07.983612026 +0000 UTC m=+144.507893173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.498926 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.507779 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.565048 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.585584 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.587001 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.08698472 +0000 UTC m=+144.611265867 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.596841 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.597313 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.613362 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.688265 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.688612 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.188600495 +0000 UTC m=+144.712881642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.696936 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.733763 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.784187 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.790999 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.795888 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.295861238 +0000 UTC m=+144.820142385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.811717 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-fgcjc"] Oct 06 07:19:07 crc kubenswrapper[4769]: I1006 07:19:07.899562 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:07 crc kubenswrapper[4769]: E1006 07:19:07.920610 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.420592896 +0000 UTC m=+144.944874043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.003692 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" event={"ID":"178b093c-7638-4f43-99d9-04a4e4573dd1","Type":"ContainerStarted","Data":"78dd2800fe6316bdbd650d64763a45b2d08606597622c3de9a4be005a9fca8a6"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.005095 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.005494 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.505479666 +0000 UTC m=+145.029760813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.009494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8vq2f" event={"ID":"687216bf-bfb2-4eed-b2ff-8b1bc02d3516","Type":"ContainerStarted","Data":"89eccaffc64dd0038bf4de277f6a482dc685c629a5577391931f7132e6facd5f"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.017776 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" event={"ID":"8045bc4d-cc45-45a6-a2fe-85107461350c","Type":"ContainerStarted","Data":"b733371f00d636cac596c281a7159dd466e6b3a2bc4625a7206caa64b1236b70"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.020910 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" podStartSLOduration=123.020867674 podStartE2EDuration="2m3.020867674s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.018290532 +0000 UTC m=+144.542571679" watchObservedRunningTime="2025-10-06 07:19:08.020867674 +0000 UTC m=+144.545148811" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.048190 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" 
event={"ID":"c319d2c0-09fc-41ee-bef3-376730d981c8","Type":"ContainerStarted","Data":"5561f6e1c4e67764dbfb100602ace487e233cc663d12628f9157fdcc037ce456"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.067749 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" event={"ID":"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b","Type":"ContainerStarted","Data":"27d5927bbd0101ae4b5bef2af7b8d70c39054e89bb34060b646832226c601eec"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.067803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" event={"ID":"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b","Type":"ContainerStarted","Data":"1fc1a702dc4e67eb0c6b6cbd1d321722ca0345c247af5b93a069bff5e4169231"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.081236 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" event={"ID":"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef","Type":"ContainerStarted","Data":"a1530c220b7bcbdd050e63f5a31d15da475e8fb16193ef7cde055577a07ca9e9"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.081285 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" event={"ID":"a6ae74f0-a980-4a1e-9d57-8c2a879f53ef","Type":"ContainerStarted","Data":"f23e2a45cf3101f61da9260ff14e56fe58f8a7d50d6a09f2fca6ab41f36aa92f"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.083893 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.086264 4769 patch_prober.go:28] interesting pod/console-operator-58897d9998-s2fwl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 
10.217.0.17:8443: connect: connection refused" start-of-body= Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.086307 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" podUID="a6ae74f0-a980-4a1e-9d57-8c2a879f53ef" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.087574 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" event={"ID":"8f45d32a-bf9f-4fb1-8999-ea280b0518a9","Type":"ContainerStarted","Data":"27066e721cdd81046285e6fe470e289f98221efd1f6695994ec8269869f6cec3"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.100094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lsg5p" event={"ID":"7e16e210-5266-45ae-9f3d-c214c5c173a4","Type":"ContainerStarted","Data":"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.106454 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.106780 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.606769643 +0000 UTC m=+145.131050790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.112378 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" event={"ID":"70104bc8-4b31-4151-8aae-ceae236c2359","Type":"ContainerStarted","Data":"159dcf8358e3201efe5234557869108cc7367f00e45a64160917d6d009303954"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.114664 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" event={"ID":"76308f77-d787-4795-91f3-15c13227c3e7","Type":"ContainerStarted","Data":"b3c044ebdb2992cc330a003305d19bdf5b877493590ebf82e5300e69fb87cef1"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.139592 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" event={"ID":"91429be1-1ad0-4685-af4a-2184431a1d9f","Type":"ContainerStarted","Data":"823d99702d0f40217499c28105e5cd51256701f4df43ea48e6b36fa4ce4223b5"} Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.151704 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.208541 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.212100 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.71208144 +0000 UTC m=+145.236362587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.257518 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-pkqrr" podStartSLOduration=123.257483503 podStartE2EDuration="2m3.257483503s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.23724679 +0000 UTC m=+144.761527937" watchObservedRunningTime="2025-10-06 07:19:08.257483503 +0000 UTC m=+144.781764650" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.312914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 
06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.314288 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.814275772 +0000 UTC m=+145.338556919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.359868 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2fc25" podStartSLOduration=124.359849689 podStartE2EDuration="2m4.359849689s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.347678051 +0000 UTC m=+144.871959198" watchObservedRunningTime="2025-10-06 07:19:08.359849689 +0000 UTC m=+144.884130836" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.362195 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp"] Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.402793 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l5vs7"] Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.417287 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.418039 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:08.918012127 +0000 UTC m=+145.442293274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.432768 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-jvgm4"] Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.444825 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pfz99"] Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.520713 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.520997 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.020986239 +0000 UTC m=+145.545267376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.624276 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.624589 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.12457409 +0000 UTC m=+145.648855227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.725885 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.726692 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.226678069 +0000 UTC m=+145.750959216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.763756 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" podStartSLOduration=123.763740169 podStartE2EDuration="2m3.763740169s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.76232347 +0000 UTC m=+145.286604617" watchObservedRunningTime="2025-10-06 07:19:08.763740169 +0000 UTC m=+145.288021316" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.827164 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.827904 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.327883923 +0000 UTC m=+145.852165070 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.840730 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ftkq8" podStartSLOduration=123.84070024 podStartE2EDuration="2m3.84070024s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.83927533 +0000 UTC m=+145.363556477" watchObservedRunningTime="2025-10-06 07:19:08.84070024 +0000 UTC m=+145.364981397" Oct 06 07:19:08 crc kubenswrapper[4769]: I1006 07:19:08.929248 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:08 crc kubenswrapper[4769]: E1006 07:19:08.929686 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.429673903 +0000 UTC m=+145.953955050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.000380 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-lsg5p" podStartSLOduration=124.000361188 podStartE2EDuration="2m4.000361188s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:08.993194939 +0000 UTC m=+145.517476096" watchObservedRunningTime="2025-10-06 07:19:09.000361188 +0000 UTC m=+145.524642335" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.030087 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.030203 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.530186118 +0000 UTC m=+146.054467265 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.030486 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.030736 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.530729323 +0000 UTC m=+146.055010470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.065870 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" podStartSLOduration=125.06584998 podStartE2EDuration="2m5.06584998s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.033931002 +0000 UTC m=+145.558212139" watchObservedRunningTime="2025-10-06 07:19:09.06584998 +0000 UTC m=+145.590131117" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.076955 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" podStartSLOduration=124.076926848 podStartE2EDuration="2m4.076926848s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.076769833 +0000 UTC m=+145.601050980" watchObservedRunningTime="2025-10-06 07:19:09.076926848 +0000 UTC m=+145.601207995" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.131487 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.131633 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.631597627 +0000 UTC m=+146.155878774 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.132534 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.133065 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.633029758 +0000 UTC m=+146.157310905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.149063 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" event={"ID":"6c267793-2072-487d-8d1b-2e962921ceee","Type":"ContainerStarted","Data":"f1b999884845a6225d3cd387ad0a4307d6a52706b1e119e2398ba6052e3145e5"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.150020 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" event={"ID":"91429be1-1ad0-4685-af4a-2184431a1d9f","Type":"ContainerStarted","Data":"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.150716 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.154548 4769 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kn4s5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.154718 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.160081 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" event={"ID":"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe","Type":"ContainerStarted","Data":"857f40b8c6a6e2e17c2664b57704d3bee8bbfa689f3632a90d401caf9ae0a28f"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.163361 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" event={"ID":"8f45d32a-bf9f-4fb1-8999-ea280b0518a9","Type":"ContainerStarted","Data":"4ef62acb3a9aab81ec03f457d40c6cd80eba441c110fe1c4d0cd6774c5c5678b"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.164209 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfz99" event={"ID":"d7f9172c-31c8-4e15-a42b-64cdba575bbd","Type":"ContainerStarted","Data":"0767a4f45b35aa9ad1a5fc8ac39408096e26ed6ec34e1d6c0bd0cb3b6bc7b946"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.180855 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" event={"ID":"c319d2c0-09fc-41ee-bef3-376730d981c8","Type":"ContainerStarted","Data":"3992b334287e4988ecbe0ba50033fafd9889bbfe1c6c9ea53653502b89069384"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.180920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" event={"ID":"c319d2c0-09fc-41ee-bef3-376730d981c8","Type":"ContainerStarted","Data":"308dd79624345bea42d4f55651de9f35d747ea8106197404ce3a08749692cfad"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.207128 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" 
event={"ID":"b2efffc9-c8d1-4b57-bbc8-c2e2b096bf0b","Type":"ContainerStarted","Data":"f0045caa5277c9111685a7084505e22645dd695e1c2af5868dc380b7f37b2398"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.240509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l5vs7" event={"ID":"b3761f9e-65a2-4d35-8028-20f7492cea9a","Type":"ContainerStarted","Data":"263b9be86adc72b764fc712364a50d51a8c6c6f5723016880d66f24b28b106d6"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.251279 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.251834 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.75181957 +0000 UTC m=+146.276100717 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.274754 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" event={"ID":"ab415f9f-c934-4c71-9702-57efb1e510ff","Type":"ContainerStarted","Data":"0cda3a16e1a1caade08fea0e30cdc3062f40a593c3c06953fc6e3d5e8f288492"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.294304 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xnztd" event={"ID":"9bab245c-413d-4d13-949e-02e428400df4","Type":"ContainerStarted","Data":"6ec294226fabb832b1fda4f8e27a6bf630f9dfe6a3dbea44e908b482dfd6bf0c"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.309351 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" podStartSLOduration=124.309322829 podStartE2EDuration="2m4.309322829s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.239825037 +0000 UTC m=+145.764106184" watchObservedRunningTime="2025-10-06 07:19:09.309322829 +0000 UTC m=+145.833603976" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.310994 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-n8nmg" podStartSLOduration=124.310981025 podStartE2EDuration="2m4.310981025s" 
podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.310173683 +0000 UTC m=+145.834454830" watchObservedRunningTime="2025-10-06 07:19:09.310981025 +0000 UTC m=+145.835262172" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.311196 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" event={"ID":"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc","Type":"ContainerStarted","Data":"195db54a0d76ad9c979e7000a300c3cb3bb549766f9464b2190d695724a2b59e"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.329699 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8vq2f" event={"ID":"687216bf-bfb2-4eed-b2ff-8b1bc02d3516","Type":"ContainerStarted","Data":"f786ef66ae488e04474cbdd2479cb710ba7bba7aae50f922806d5ded9ecb90c0"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.344811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" event={"ID":"8045bc4d-cc45-45a6-a2fe-85107461350c","Type":"ContainerStarted","Data":"85a3821f66d749fdd335f77be24f0ead642fb99f434e88fb57e382042951f0a7"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.347624 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" event={"ID":"600730b8-8dad-4391-bcf2-9fe5a4ca21b8","Type":"ContainerStarted","Data":"1a05184e74ca487fa763710e9861e03feb41baf33d5eb47fd34c8e8f0cf9740a"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.350430 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" 
event={"ID":"178b093c-7638-4f43-99d9-04a4e4573dd1","Type":"ContainerStarted","Data":"1cb9dd975968418d569b39499879419c5b07331e246e4dc023c37d2605b9a9f2"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.353910 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.361779 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.861755478 +0000 UTC m=+146.386036625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.363767 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" event={"ID":"856dea71-a479-4575-86ce-6021c6e70e6d","Type":"ContainerStarted","Data":"5fd6e44c443bfce8cdbeaf41ae83b798d6ebfe298ccb893af3c29a609fe824d5"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.363821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" 
event={"ID":"856dea71-a479-4575-86ce-6021c6e70e6d","Type":"ContainerStarted","Data":"5a82f8018904daf85eeb76baaac21be0afb6f2eff17c34fb8a02705eac39fc43"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.368749 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8vq2f" podStartSLOduration=5.368719211 podStartE2EDuration="5.368719211s" podCreationTimestamp="2025-10-06 07:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.364115692 +0000 UTC m=+145.888396839" watchObservedRunningTime="2025-10-06 07:19:09.368719211 +0000 UTC m=+145.893000358" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.376131 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" event={"ID":"70104bc8-4b31-4151-8aae-ceae236c2359","Type":"ContainerStarted","Data":"f9c5ff618290c15ac81ce7b0b0c6a65b450b1545cfca14ff4e0a5f297470e521"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.398760 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tjdr6" podStartSLOduration=124.398741546 podStartE2EDuration="2m4.398741546s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.398091247 +0000 UTC m=+145.922372394" watchObservedRunningTime="2025-10-06 07:19:09.398741546 +0000 UTC m=+145.923022693" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.421486 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jp7kb" 
event={"ID":"76308f77-d787-4795-91f3-15c13227c3e7","Type":"ContainerStarted","Data":"27788aacb9cdc510cbf673cbab9d4e7dd2d82bb66093fbb59c45574560446af7"} Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.422214 4769 patch_prober.go:28] interesting pod/console-operator-58897d9998-s2fwl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.422546 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" podUID="a6ae74f0-a980-4a1e-9d57-8c2a879f53ef" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.455214 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.456865 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:09.956845471 +0000 UTC m=+146.481126618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.478334 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-px9vm" podStartSLOduration=124.478314038 podStartE2EDuration="2m4.478314038s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:09.475959142 +0000 UTC m=+146.000240289" watchObservedRunningTime="2025-10-06 07:19:09.478314038 +0000 UTC m=+146.002595185" Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.557594 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.559595 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.059580348 +0000 UTC m=+146.583861575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.659602 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.659990 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.159975029 +0000 UTC m=+146.684256176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.667359 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.694364 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.763249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.764711 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.26469047 +0000 UTC m=+146.788971617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.765386 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.770856 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f9jjc"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.820560 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-k666r"] Oct 06 07:19:09 crc kubenswrapper[4769]: W1006 07:19:09.833550 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaa140a6_2fa8_4527_80c5_3362db13fb76.slice/crio-2235efece91bfc8145be29157e72039a75517556f740530dede17b249e60242e WatchSource:0}: Error finding container 2235efece91bfc8145be29157e72039a75517556f740530dede17b249e60242e: Status 404 returned error can't find the container with id 2235efece91bfc8145be29157e72039a75517556f740530dede17b249e60242e Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.848105 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.868264 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-sdgpd"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 
07:19:09.868321 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.870109 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.870675 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.370650047 +0000 UTC m=+146.894931194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.899543 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cstzc"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.902859 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.907048 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-rp2mj"] Oct 06 07:19:09 crc kubenswrapper[4769]: W1006 
07:19:09.939291 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582d0a7e_6f76_4662_b311_658438fa3dc1.slice/crio-3d97334c05aee6238c6697446f3427a23cd46837e7f21098c411abfb676aed6f WatchSource:0}: Error finding container 3d97334c05aee6238c6697446f3427a23cd46837e7f21098c411abfb676aed6f: Status 404 returned error can't find the container with id 3d97334c05aee6238c6697446f3427a23cd46837e7f21098c411abfb676aed6f Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.948596 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.951111 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.955776 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.972718 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks"] Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.972989 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:09 crc kubenswrapper[4769]: E1006 07:19:09.973380 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:10.473364873 +0000 UTC m=+146.997646020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:09 crc kubenswrapper[4769]: I1006 07:19:09.988798 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb"] Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.073959 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.074072 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.574044793 +0000 UTC m=+147.098325940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.074219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.074558 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.574548536 +0000 UTC m=+147.098829723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.175049 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.175750 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.67573239 +0000 UTC m=+147.200013537 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.277803 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.278250 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.77823561 +0000 UTC m=+147.302516767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.379179 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.379789 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:10.879774413 +0000 UTC m=+147.404055560 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.438022 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" event={"ID":"500255b4-789b-43dd-ba43-067682532ae9","Type":"ContainerStarted","Data":"34e943552d6394d689c5f8c33771e83bd6449e1ba0f492a5dc4ea0d707d4efab"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.447754 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" event={"ID":"dc476882-04fd-4566-acb1-ed94a705c94f","Type":"ContainerStarted","Data":"bdc0160f6213027f62b1a487e37bbb4c7138d1fa1c833b36225a0cbd6799b266"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.447794 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" event={"ID":"dc476882-04fd-4566-acb1-ed94a705c94f","Type":"ContainerStarted","Data":"68819bb29aad1a37e577b54a5cb14d058f0372df01711e18135b94f0d195a46c"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.449559 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.450685 4769 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-grgwk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 
10.217.0.38:5443: connect: connection refused" start-of-body= Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.450719 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" podUID="dc476882-04fd-4566-acb1-ed94a705c94f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.454681 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" event={"ID":"5a1e66fd-7451-4984-a953-2bb28b2f4cf0","Type":"ContainerStarted","Data":"70fa5f25e22cfba9fd8339227aa7003e33c1400e160f3f96a7b48cf67b3ef796"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.461316 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cstzc" event={"ID":"582d0a7e-6f76-4662-b311-658438fa3dc1","Type":"ContainerStarted","Data":"60ff59549fc170e90bdd43cb2a0d8b8cbe474e20bddb52b22c429b090f79c926"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.461367 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cstzc" event={"ID":"582d0a7e-6f76-4662-b311-658438fa3dc1","Type":"ContainerStarted","Data":"3d97334c05aee6238c6697446f3427a23cd46837e7f21098c411abfb676aed6f"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.465945 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" event={"ID":"b5b1a836-7d3e-4e40-8147-b59b826c3885","Type":"ContainerStarted","Data":"f01a62a4938f9e11d5a662fcf280476a0809cbc5b3a81836d4cec8b91112e894"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.473259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" event={"ID":"dfe17f06-5ce4-492b-b856-5b3b63c1b214","Type":"ContainerStarted","Data":"c1549dffb599aeb36bae9de907defc3313f6844d863f8702061d049744aa6cee"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.473298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" event={"ID":"dfe17f06-5ce4-492b-b856-5b3b63c1b214","Type":"ContainerStarted","Data":"ec3eb5ec178fc68bcddb04f6b61510ed680902f8ea6ca0070c5172b34c96b056"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.474599 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" podStartSLOduration=125.474577109 podStartE2EDuration="2m5.474577109s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.469458567 +0000 UTC m=+146.993739714" watchObservedRunningTime="2025-10-06 07:19:10.474577109 +0000 UTC m=+146.998858256" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.484313 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.484639 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:10.984627678 +0000 UTC m=+147.508908825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.484986 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cstzc" podStartSLOduration=6.484969538 podStartE2EDuration="6.484969538s" podCreationTimestamp="2025-10-06 07:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.483848297 +0000 UTC m=+147.008129444" watchObservedRunningTime="2025-10-06 07:19:10.484969538 +0000 UTC m=+147.009250685" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.491880 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-xnztd" event={"ID":"9bab245c-413d-4d13-949e-02e428400df4","Type":"ContainerStarted","Data":"5aebab5049ef72733c65fb7086e05884e8eea7228ae623b93195b610c57cf500"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.494918 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" event={"ID":"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784","Type":"ContainerStarted","Data":"ec07f6e29268b2da3bebc0e660c75e5c686c720f505a2a0ad9c3ce87b0899317"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.497313 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-l5vs7" event={"ID":"b3761f9e-65a2-4d35-8028-20f7492cea9a","Type":"ContainerStarted","Data":"917e6f03716ebab0baad1130031bab569e8a9ae1b5271e344e6c4ec3ee4147d7"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.498572 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.501687 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.501730 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.509115 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.516458 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.516720 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 06 07:19:10 crc 
kubenswrapper[4769]: I1006 07:19:10.517708 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-xnztd" podStartSLOduration=125.517689518 podStartE2EDuration="2m5.517689518s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.516517746 +0000 UTC m=+147.040798913" watchObservedRunningTime="2025-10-06 07:19:10.517689518 +0000 UTC m=+147.041970665" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.529658 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" event={"ID":"ab415f9f-c934-4c71-9702-57efb1e510ff","Type":"ContainerStarted","Data":"1f664548e7ea5c72cf57d853d461b952dead2d1dacdc1c80f70fbf62508fdc47"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.536960 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l5vs7" podStartSLOduration=125.536942843 podStartE2EDuration="2m5.536942843s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.53321131 +0000 UTC m=+147.057492457" watchObservedRunningTime="2025-10-06 07:19:10.536942843 +0000 UTC m=+147.061223990" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.544971 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" event={"ID":"178b093c-7638-4f43-99d9-04a4e4573dd1","Type":"ContainerStarted","Data":"7c98c6164f7a3be4f600db6212cb12f8fda860a0aa6ee16c8350c188f95c632a"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.545878 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.547981 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" event={"ID":"0d67c599-54fe-4062-b4d4-b5319c3902be","Type":"ContainerStarted","Data":"311ac6bccc6ba18b6fc0579995ac658a2215e29f687cce777a95cce6fb3ebcdf"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.548016 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" event={"ID":"0d67c599-54fe-4062-b4d4-b5319c3902be","Type":"ContainerStarted","Data":"3401e29834630a29dbad419b1504a77e6d810f13d55bb92ef6f49b5d65e5d56a"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.550319 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" event={"ID":"39a765da-9609-4f66-8444-51c13efe3d3c","Type":"ContainerStarted","Data":"2c7bdc629b5ec2fea738a0e706302e874457727865f10fad60e1244886db8dca"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.564852 4769 generic.go:334] "Generic (PLEG): container finished" podID="8f45d32a-bf9f-4fb1-8999-ea280b0518a9" containerID="4ef62acb3a9aab81ec03f457d40c6cd80eba441c110fe1c4d0cd6774c5c5678b" exitCode=0 Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.565090 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" event={"ID":"8f45d32a-bf9f-4fb1-8999-ea280b0518a9","Type":"ContainerDied","Data":"4ef62acb3a9aab81ec03f457d40c6cd80eba441c110fe1c4d0cd6774c5c5678b"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.569178 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" podStartSLOduration=125.56916444 podStartE2EDuration="2m5.56916444s" 
podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.567797811 +0000 UTC m=+147.092078958" watchObservedRunningTime="2025-10-06 07:19:10.56916444 +0000 UTC m=+147.093445577" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.586122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" event={"ID":"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2","Type":"ContainerStarted","Data":"c22c694bc41eb2b2d4b19c5d78f6f064e0ced9d0bbb9198366ba5207e514babd"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.587311 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.587611 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.087594871 +0000 UTC m=+147.611876018 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.587817 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.588937 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.088921069 +0000 UTC m=+147.613202216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.589248 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" event={"ID":"18c78bde-92f9-4e3a-998e-659f978d26a6","Type":"ContainerStarted","Data":"17aaaf0b1079b4755469832e60aa20ab947a345959f7b4596d4ebe4868550771"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.589930 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.591070 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" event={"ID":"1ca5afb6-c0da-4409-a7c1-2209e8b279fa","Type":"ContainerStarted","Data":"4bcc574f34081eb379eb7deafa3ab2d58c8e2550d67253e7d3025951fcc5012f"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.595661 4769 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bv9ks container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.595717 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" podUID="18c78bde-92f9-4e3a-998e-659f978d26a6" containerName="olm-operator" 
probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.613863 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" event={"ID":"30232ce1-b859-4773-b763-648fb3399f4d","Type":"ContainerStarted","Data":"a6e2277e8f4dc20a0003e286b0175cb99fe263f97412047101ac5208a7dcb733"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.618572 4769 generic.go:334] "Generic (PLEG): container finished" podID="c0366b1e-5791-4b3c-955d-4adbeb2f9ccc" containerID="9ce0a985a16e825564906a0a6649b0bb6f6cee582516679d6905cab66d52c069" exitCode=0 Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.618650 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" event={"ID":"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc","Type":"ContainerDied","Data":"9ce0a985a16e825564906a0a6649b0bb6f6cee582516679d6905cab66d52c069"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.620794 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" event={"ID":"f81eb0e9-5f14-40e6-a457-af787a30fca4","Type":"ContainerStarted","Data":"dd6b74376b07666ca6cfc4a066d7543a353e7f3ccad210f3e4e880b97195e851"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.631324 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfz99" event={"ID":"d7f9172c-31c8-4e15-a42b-64cdba575bbd","Type":"ContainerStarted","Data":"efd49d31b89ef753be2d0cd16037d69b73062402bb20d9baa8196613b54ef6e3"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.639125 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" podStartSLOduration=125.639112213 podStartE2EDuration="2m5.639112213s" podCreationTimestamp="2025-10-06 
07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.634787694 +0000 UTC m=+147.159068851" watchObservedRunningTime="2025-10-06 07:19:10.639112213 +0000 UTC m=+147.163393350" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.644881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" event={"ID":"6c267793-2072-487d-8d1b-2e962921ceee","Type":"ContainerStarted","Data":"cbbba657d56647066f53fd14be999f3d956963f3055313c016da1a4c15493040"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.644937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" event={"ID":"6c267793-2072-487d-8d1b-2e962921ceee","Type":"ContainerStarted","Data":"5682ca1ceb687b51d0878d2c24a8ada0db7f40cdb56d459f7be56d72e4615f00"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.649403 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" event={"ID":"600730b8-8dad-4391-bcf2-9fe5a4ca21b8","Type":"ContainerStarted","Data":"638bdb9435243e59221ca9ebbfbbc2ce142424bd29889ef14a983cb4a74d0e03"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.649449 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" event={"ID":"600730b8-8dad-4391-bcf2-9fe5a4ca21b8","Type":"ContainerStarted","Data":"ce8777f136450c3adb4d5ca517478708f2aaa56ead328dc5522549d403d03c5e"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.659119 4769 generic.go:334] "Generic (PLEG): container finished" podID="856dea71-a479-4575-86ce-6021c6e70e6d" containerID="5fd6e44c443bfce8cdbeaf41ae83b798d6ebfe298ccb893af3c29a609fe824d5" exitCode=0 Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.659169 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" event={"ID":"856dea71-a479-4575-86ce-6021c6e70e6d","Type":"ContainerDied","Data":"5fd6e44c443bfce8cdbeaf41ae83b798d6ebfe298ccb893af3c29a609fe824d5"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.664750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" event={"ID":"baa140a6-2fa8-4527-80c5-3362db13fb76","Type":"ContainerStarted","Data":"beb113d62aa7e051102611b5d90ec331f2bff0952cfac2a4da53b61096d05b3d"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.664781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" event={"ID":"baa140a6-2fa8-4527-80c5-3362db13fb76","Type":"ContainerStarted","Data":"2235efece91bfc8145be29157e72039a75517556f740530dede17b249e60242e"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.668059 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" event={"ID":"6f258e1d-9ba8-4d8f-bfbb-8373f4ea3fbe","Type":"ContainerStarted","Data":"acf098d32387899c5a9448b33364a532be280adbac0e05f602946cd21b91ddd7"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.673950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" event={"ID":"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa","Type":"ContainerStarted","Data":"b745eb3417fcd7d8829ef5f36015e71ed1258f95e76254f53fbe25c3baab486e"} Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.676212 4769 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kn4s5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Oct 06 07:19:10 crc kubenswrapper[4769]: 
I1006 07:19:10.676245 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.682846 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-c5rcq" podStartSLOduration=126.6828328 podStartE2EDuration="2m6.6828328s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.681725419 +0000 UTC m=+147.206006566" watchObservedRunningTime="2025-10-06 07:19:10.6828328 +0000 UTC m=+147.207113947" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.690630 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.691814 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.191799999 +0000 UTC m=+147.716081146 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.700404 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-fgcjc" podStartSLOduration=125.700389397 podStartE2EDuration="2m5.700389397s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.698719312 +0000 UTC m=+147.223000459" watchObservedRunningTime="2025-10-06 07:19:10.700389397 +0000 UTC m=+147.224670544" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.756262 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-457g7" podStartSLOduration=125.756247111 podStartE2EDuration="2m5.756247111s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.754578135 +0000 UTC m=+147.278859282" watchObservedRunningTime="2025-10-06 07:19:10.756247111 +0000 UTC m=+147.280528258" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.797437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: 
\"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.803390 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.303371071 +0000 UTC m=+147.827652208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.813143 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cnwm9" podStartSLOduration=125.813122443 podStartE2EDuration="2m5.813122443s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.809669396 +0000 UTC m=+147.333950543" watchObservedRunningTime="2025-10-06 07:19:10.813122443 +0000 UTC m=+147.337403580" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.813320 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f9jjc" podStartSLOduration=125.813314658 podStartE2EDuration="2m5.813314658s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:10.785442883 +0000 
UTC m=+147.309724030" watchObservedRunningTime="2025-10-06 07:19:10.813314658 +0000 UTC m=+147.337595805" Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.900407 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.900571 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.400554573 +0000 UTC m=+147.924835710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:10 crc kubenswrapper[4769]: I1006 07:19:10.900655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:10 crc kubenswrapper[4769]: E1006 07:19:10.901058 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.401039317 +0000 UTC m=+147.925320514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.001257 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.001470 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.501435908 +0000 UTC m=+148.025717055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.001901 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.002355 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.502330113 +0000 UTC m=+148.026611360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.103032 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.103191 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.603163047 +0000 UTC m=+148.127444194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.103402 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.103761 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.603742103 +0000 UTC m=+148.128023290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.204853 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.205125 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.705083621 +0000 UTC m=+148.229364768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.205364 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.205935 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.705918134 +0000 UTC m=+148.230199281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.306398 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.306547 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.806529602 +0000 UTC m=+148.330810759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.306619 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.307224 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.80719813 +0000 UTC m=+148.331479457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.407382 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.407584 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.907553311 +0000 UTC m=+148.431834478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.407868 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.408300 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:11.908289541 +0000 UTC m=+148.432570758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.509143 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.509233 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.009215137 +0000 UTC m=+148.533496274 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.509576 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.509860 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.009852284 +0000 UTC m=+148.534133431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.516193 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:11 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:11 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:11 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.516270 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.611259 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.611480 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:12.11145581 +0000 UTC m=+148.635736957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.611964 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.612269 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.112256682 +0000 UTC m=+148.636537829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.681184 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfz99" event={"ID":"d7f9172c-31c8-4e15-a42b-64cdba575bbd","Type":"ContainerStarted","Data":"c1d50dfcf131835ed7ff5f423b9a494ba287a897d1aa49e1b640067506c9d4f0"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.681352 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.684558 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" event={"ID":"0d67c599-54fe-4062-b4d4-b5319c3902be","Type":"ContainerStarted","Data":"aed002a45167491d1f6f9127146439aa67dd47f68dde47f344b6a00abe2b63fb"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.686740 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" event={"ID":"39a765da-9609-4f66-8444-51c13efe3d3c","Type":"ContainerStarted","Data":"7c3bf6060476df19db1dfd0e9b3c68f0205bc91a69d80093edf7e3d01204e6e1"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.688851 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" event={"ID":"340a9fe0-d7c5-4b36-868d-f70cef5dd2b2","Type":"ContainerStarted","Data":"6cf41922a98994323ce75f8464fa599a6b11796014fc673088e0bb5027a245f9"} Oct 06 07:19:11 crc 
kubenswrapper[4769]: I1006 07:19:11.692899 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" event={"ID":"8f45d32a-bf9f-4fb1-8999-ea280b0518a9","Type":"ContainerStarted","Data":"a4197c95d6399fd617025f412b087672569e45d40dab5cf1b7f4e1b16a1abb3f"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.692936 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" event={"ID":"8f45d32a-bf9f-4fb1-8999-ea280b0518a9","Type":"ContainerStarted","Data":"bd77e942987dd748145c1fd660c39fc4b9b59ef0497066d1b95842282daf006d"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.694809 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" event={"ID":"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa","Type":"ContainerStarted","Data":"44ded000e408a808794358baf6c59192039ab2b39297b20a4ab81d9115163b08"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.695061 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.696303 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" event={"ID":"18c78bde-92f9-4e3a-998e-659f978d26a6","Type":"ContainerStarted","Data":"7f224031016307d17e5e8ccac1cef125a88e71b93220080b0343b4efac58b1c0"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.697116 4769 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-bv9ks container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.697151 4769 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" podUID="18c78bde-92f9-4e3a-998e-659f978d26a6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.697340 4769 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jm2xh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.697383 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.699176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" event={"ID":"c0366b1e-5791-4b3c-955d-4adbeb2f9ccc","Type":"ContainerStarted","Data":"3595e6cd5050fe0f2189588aaecd50e8b5faf7b1c441299a8444008234ea1c19"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.703062 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" event={"ID":"5a1e66fd-7451-4984-a953-2bb28b2f4cf0","Type":"ContainerStarted","Data":"f23ab2c8ffff6c62b9db78a961d0e2e53c5d41ea82bf897c147a4db92e6e8f47"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.703101 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" 
event={"ID":"5a1e66fd-7451-4984-a953-2bb28b2f4cf0","Type":"ContainerStarted","Data":"b85b9c6cf248c405f0e7e0efa6e963ab6e2c923a1f077e8e31136bf0763f7656"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.705680 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" event={"ID":"ab415f9f-c934-4c71-9702-57efb1e510ff","Type":"ContainerStarted","Data":"8b74a4b6fd26f84791753bb30db042f50c51ff08556af871e4bd899c1171d307"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.708126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" event={"ID":"1ca5afb6-c0da-4409-a7c1-2209e8b279fa","Type":"ContainerStarted","Data":"0aa6b0ca488c528d1b535a821aab19ac7a85d88d51b9ad21b167061aa6ac653b"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.709412 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" event={"ID":"30232ce1-b859-4773-b763-648fb3399f4d","Type":"ContainerStarted","Data":"89539a4872ad3f2730994bbae8da2f3efa87dea75108c448794e208a534b4bf8"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.709649 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.711184 4769 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x4wxj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.711222 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" podUID="30232ce1-b859-4773-b763-648fb3399f4d" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.711450 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" event={"ID":"856dea71-a479-4575-86ce-6021c6e70e6d","Type":"ContainerStarted","Data":"f9b04c5eb8137b1c54f0242c2cc13a3c1ef4df0856a6fcc9009f832b195ef953"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.711544 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.712730 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" event={"ID":"dfe17f06-5ce4-492b-b856-5b3b63c1b214","Type":"ContainerStarted","Data":"44e02e325ca391d2f489515d0d5e89b2344086695bb62c55ab5b74d7fc1cc8c1"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.712820 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.712893 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.21287505 +0000 UTC m=+148.737156197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.713102 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.713440 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.213432235 +0000 UTC m=+148.737713382 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.713978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" event={"ID":"b5b1a836-7d3e-4e40-8147-b59b826c3885","Type":"ContainerStarted","Data":"3a7500bea481ebaf0cb68512703995fa9d4470dad1566350e9d679fc111a63bd"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.719006 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" event={"ID":"9c3b3d7f-34f3-4ed3-905e-b2cf4c25c784","Type":"ContainerStarted","Data":"25c0e75966fa4dee9b5dc4e688a0c01965253cb92741752715a0ac69a8b5725c"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.721733 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pfz99" podStartSLOduration=7.721716116 podStartE2EDuration="7.721716116s" podCreationTimestamp="2025-10-06 07:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.700540026 +0000 UTC m=+148.224821173" watchObservedRunningTime="2025-10-06 07:19:11.721716116 +0000 UTC m=+148.245997263" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.724154 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" podStartSLOduration=126.724142273 
podStartE2EDuration="2m6.724142273s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.72079746 +0000 UTC m=+148.245078607" watchObservedRunningTime="2025-10-06 07:19:11.724142273 +0000 UTC m=+148.248423420" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.728763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" event={"ID":"500255b4-789b-43dd-ba43-067682532ae9","Type":"ContainerStarted","Data":"2233d6794a419babd880c2410e95a313243d31a318aacb4b5dfd14e744d9568e"} Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.730109 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.730154 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.730444 4769 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-grgwk container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.730512 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" 
podUID="dc476882-04fd-4566-acb1-ed94a705c94f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.768060 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-k666r" podStartSLOduration=126.768041933 podStartE2EDuration="2m6.768041933s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.745172288 +0000 UTC m=+148.269453445" watchObservedRunningTime="2025-10-06 07:19:11.768041933 +0000 UTC m=+148.292323100" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.768913 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" podStartSLOduration=126.768908218 podStartE2EDuration="2m6.768908218s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.767459517 +0000 UTC m=+148.291740664" watchObservedRunningTime="2025-10-06 07:19:11.768908218 +0000 UTC m=+148.293189365" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.813437 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" podStartSLOduration=126.813407705 podStartE2EDuration="2m6.813407705s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.793135841 +0000 UTC m=+148.317416988" watchObservedRunningTime="2025-10-06 07:19:11.813407705 +0000 UTC m=+148.337688852" 
Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.813737 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.814374 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lmbcj" podStartSLOduration=126.814369351 podStartE2EDuration="2m6.814369351s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.812093378 +0000 UTC m=+148.336374525" watchObservedRunningTime="2025-10-06 07:19:11.814369351 +0000 UTC m=+148.338650498" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.814778 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.314756062 +0000 UTC m=+148.839037209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.877494 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" podStartSLOduration=126.877475467 podStartE2EDuration="2m6.877475467s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.854917428 +0000 UTC m=+148.379198575" watchObservedRunningTime="2025-10-06 07:19:11.877475467 +0000 UTC m=+148.401756614" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.910182 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f" podStartSLOduration=127.910164335 podStartE2EDuration="2m7.910164335s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.909681092 +0000 UTC m=+148.433962239" watchObservedRunningTime="2025-10-06 07:19:11.910164335 +0000 UTC m=+148.434445472" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.911699 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-jvgm4" podStartSLOduration=126.911692948 podStartE2EDuration="2m6.911692948s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.879258806 +0000 UTC m=+148.403539953" watchObservedRunningTime="2025-10-06 07:19:11.911692948 +0000 UTC m=+148.435974095" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.918657 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:11 crc kubenswrapper[4769]: E1006 07:19:11.919807 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.419791383 +0000 UTC m=+148.944072530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.940903 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-jw7gb" podStartSLOduration=126.9408896 podStartE2EDuration="2m6.9408896s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.938328708 +0000 UTC m=+148.462609855" watchObservedRunningTime="2025-10-06 07:19:11.9408896 +0000 UTC m=+148.465170737" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.955512 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.955853 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.966748 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-m544s" podStartSLOduration=126.966733368 podStartE2EDuration="2m6.966733368s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.965298749 +0000 UTC m=+148.489579906" 
watchObservedRunningTime="2025-10-06 07:19:11.966733368 +0000 UTC m=+148.491014515" Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.977373 4769 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zhqnp container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Oct 06 07:19:11 crc kubenswrapper[4769]: I1006 07:19:11.977442 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" podUID="c0366b1e-5791-4b3c-955d-4adbeb2f9ccc" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.018851 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" podStartSLOduration=127.018830837 podStartE2EDuration="2m7.018830837s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:12.01857832 +0000 UTC m=+148.542859467" watchObservedRunningTime="2025-10-06 07:19:12.018830837 +0000 UTC m=+148.543111984" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.019150 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-w2sjg" podStartSLOduration=127.019117815 podStartE2EDuration="2m7.019117815s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:11.987950898 +0000 UTC m=+148.512232045" watchObservedRunningTime="2025-10-06 07:19:12.019117815 
+0000 UTC m=+148.543398962" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.021123 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.52110334 +0000 UTC m=+149.045384487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.021154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.021474 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.021828 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:12.52181752 +0000 UTC m=+149.046098667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.054179 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-gn24g" podStartSLOduration=127.053964985 podStartE2EDuration="2m7.053964985s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:12.033762152 +0000 UTC m=+148.558043299" watchObservedRunningTime="2025-10-06 07:19:12.053964985 +0000 UTC m=+148.578246132" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.054581 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bstmm" podStartSLOduration=127.054574021 podStartE2EDuration="2m7.054574021s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:12.049258313 +0000 UTC m=+148.573539460" watchObservedRunningTime="2025-10-06 07:19:12.054574021 +0000 UTC m=+148.578855168" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.079735 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-rp2mj" 
podStartSLOduration=127.079721651 podStartE2EDuration="2m7.079721651s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:12.078524587 +0000 UTC m=+148.602805734" watchObservedRunningTime="2025-10-06 07:19:12.079721651 +0000 UTC m=+148.604002798" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.123074 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.123323 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.623290451 +0000 UTC m=+149.147571598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.123732 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.124054 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.624042433 +0000 UTC m=+149.148323570 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.225100 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.225472 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.725457283 +0000 UTC m=+149.249738430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.326438 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.326777 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.8267657 +0000 UTC m=+149.351046847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.429911 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.430349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.430388 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.430407 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.430470 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.431291 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:12.931260205 +0000 UTC m=+149.455541352 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.436913 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.438539 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.439008 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.440386 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.513946 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:12 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:12 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:12 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.514000 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.531316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.531704 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.031689457 +0000 UTC m=+149.555970604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.590178 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.606655 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.621918 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.634800 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.635034 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.135020151 +0000 UTC m=+149.659301298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.736031 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.736299 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.236286006 +0000 UTC m=+149.760567153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.749311 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" event={"ID":"f81eb0e9-5f14-40e6-a457-af787a30fca4","Type":"ContainerStarted","Data":"a63b112e2e7cdf9e63b60704141087dedfe4ba354c3d0971f7f599052a1b4f73"} Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.749980 4769 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-jm2xh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.750016 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.750025 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.750050 4769 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.758716 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x4wxj" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.825743 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bv9ks" Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.859517 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.861621 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.361594621 +0000 UTC m=+149.885875768 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:12 crc kubenswrapper[4769]: I1006 07:19:12.962096 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:12 crc kubenswrapper[4769]: E1006 07:19:12.962503 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.462489916 +0000 UTC m=+149.986771063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.063558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.064359 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.564341668 +0000 UTC m=+150.088622815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.169153 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.169640 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.669625176 +0000 UTC m=+150.193906333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.270416 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.270673 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.770657375 +0000 UTC m=+150.294938532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.270742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.271005 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.770998874 +0000 UTC m=+150.295280021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.329706 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-grgwk" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.371652 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.371867 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.871806717 +0000 UTC m=+150.396087864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.372192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.372491 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.872480066 +0000 UTC m=+150.396761213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.474292 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.474482 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.974460142 +0000 UTC m=+150.498741289 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.474777 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.475181 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:13.975164341 +0000 UTC m=+150.499445478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.511663 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:13 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:13 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:13 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.511726 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.575691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.575934 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:14.075912803 +0000 UTC m=+150.600193960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.576264 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.576551 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.07653907 +0000 UTC m=+150.600820217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.676901 4769 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.677192 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.677351 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.177332282 +0000 UTC m=+150.701613429 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.677455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.677776 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.177769314 +0000 UTC m=+150.702050461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.750909 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.751549 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.755151 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.756195 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.759095 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7e5c37535f566bf648ea21ca721aed7efb97f5f4fd03296a304fccb33a112761"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.759135 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f9a71fbe6d35cb20921ffd23252ab653c137375d80103194c51a6c1a44103bc8"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.759310 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.760977 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" event={"ID":"f81eb0e9-5f14-40e6-a457-af787a30fca4","Type":"ContainerStarted","Data":"e53138c78caef160a1296cef9d5f4261be48e72f743eda8cee170db15c930bce"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.761003 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" event={"ID":"f81eb0e9-5f14-40e6-a457-af787a30fca4","Type":"ContainerStarted","Data":"1bbca8431ab5d3df6fc266881b734f428f65aca31c380543593a797e14faaa4a"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.762932 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a29dbec5c6599d979b6496cb48bee78d6f95d1109c78960ecfda9dde6db2e43b"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.762959 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7adb1b6a2aeb71cd82d47439e62b0de0c586c62752a3782fe287f58c9e65a59f"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.762971 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.770176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cba9a10a039e664c0aa673ab59f608bf13f2513db3754320680e1ed6089ffd84"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 
07:19:13.770204 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ee5fbe34b7b9f4bdbe36a9b06a485eda7d6581f83a8c714ef4f3cf995198adbd"} Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.771471 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.779140 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.779530 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.279514633 +0000 UTC m=+150.803795780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.880787 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.880842 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.880915 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.882258 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-06 07:19:14.38224201 +0000 UTC m=+150.906523147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.982322 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.982455 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.482436046 +0000 UTC m=+151.006717193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.984204 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.984312 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.984341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:13 crc kubenswrapper[4769]: I1006 07:19:13.984547 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:13 crc kubenswrapper[4769]: E1006 07:19:13.984661 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.484653048 +0000 UTC m=+151.008934195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.005090 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.042275 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ckl4"] Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.043192 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.049980 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.060387 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ckl4"] Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.076776 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.085306 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.085531 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.585511312 +0000 UTC m=+151.109792459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.085667 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.085959 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.585950324 +0000 UTC m=+151.110231471 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.190743 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.190994 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.191057 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.191081 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zm5\" (UniqueName: \"kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5\") pod \"community-operators-7ckl4\" 
(UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.191189 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.691170879 +0000 UTC m=+151.215452026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.246093 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.247912 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.249884 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.257093 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292065 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292119 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zm5\" (UniqueName: \"kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292185 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.292887 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.293120 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.793108354 +0000 UTC m=+151.317389501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.319717 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.331835 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zm5\" (UniqueName: \"kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5\") pod \"community-operators-7ckl4\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: W1006 07:19:14.340127 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod560bbc8b_e88c_4763_911f_1cbe20773590.slice/crio-d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9 WatchSource:0}: Error finding container d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9: Status 404 returned error can't find the container with id d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.352416 4769 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-06T07:19:13.676933741Z","Handler":null,"Name":""}
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.358932 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ckl4"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.393010 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.393274 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.893233058 +0000 UTC m=+151.417514205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.393497 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.393626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.393739 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.393871 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrc8\" (UniqueName: \"kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: E1006 07:19:14.393922 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-06 07:19:14.893914237 +0000 UTC m=+151.418195374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-h7lhw" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.436608 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g67x9"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.437575 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.454032 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g67x9"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.475584 4769 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.475615 4769 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.497039 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.497357 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrc8\" (UniqueName: \"kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.497447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.497497 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.498060 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.499497 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.518656 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 07:19:14 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Oct 06 07:19:14 crc kubenswrapper[4769]: [+]process-running ok
Oct 06 07:19:14 crc kubenswrapper[4769]: healthz check failed
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.518734 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.528700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrc8\" (UniqueName: \"kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8\") pod \"certified-operators-tgg5p\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.599284 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.599674 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.599711 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzl79\" (UniqueName: \"kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.600025 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.609348 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgg5p"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.640311 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.641351 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.654201 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.701243 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.701336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.701367 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.701402 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzl79\" (UniqueName: \"kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.702342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.703185 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.723377 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzl79\" (UniqueName: \"kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79\") pod \"community-operators-g67x9\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.731811 4769 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.731854 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.780960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"560bbc8b-e88c-4763-911f-1cbe20773590","Type":"ContainerStarted","Data":"d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9"}
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.786616 4769 generic.go:334] "Generic (PLEG): container finished" podID="39a765da-9609-4f66-8444-51c13efe3d3c" containerID="7c3bf6060476df19db1dfd0e9b3c68f0205bc91a69d80093edf7e3d01204e6e1" exitCode=0
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.786715 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" event={"ID":"39a765da-9609-4f66-8444-51c13efe3d3c","Type":"ContainerDied","Data":"7c3bf6060476df19db1dfd0e9b3c68f0205bc91a69d80093edf7e3d01204e6e1"}
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.788783 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ckl4"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.792231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" event={"ID":"f81eb0e9-5f14-40e6-a457-af787a30fca4","Type":"ContainerStarted","Data":"21c8044716765e724bf8b298ebd8e768cdba2d653ddfee3445c00dc1032f4db3"}
Oct 06 07:19:14 crc kubenswrapper[4769]: W1006 07:19:14.797884 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ee769cc_e37f_4ba6_80eb_c7f0cf55dee8.slice/crio-a3a55b795a5a90556931959b2ad9e1f1d989cec94c2dc86d3d642780d85d0bdd WatchSource:0}: Error finding container a3a55b795a5a90556931959b2ad9e1f1d989cec94c2dc86d3d642780d85d0bdd: Status 404 returned error can't find the container with id a3a55b795a5a90556931959b2ad9e1f1d989cec94c2dc86d3d642780d85d0bdd
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.803520 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqpd\" (UniqueName: \"kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.803604 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.803629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.805978 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g67x9"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.826716 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-sdgpd" podStartSLOduration=10.826698350000001 podStartE2EDuration="10.82669835s" podCreationTimestamp="2025-10-06 07:19:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:14.824017256 +0000 UTC m=+151.348298403" watchObservedRunningTime="2025-10-06 07:19:14.82669835 +0000 UTC m=+151.350979497"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.847446 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-h7lhw\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.904617 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkqpd\" (UniqueName: \"kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.904727 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.904781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.905300 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.906784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.913474 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"]
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.933331 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkqpd\" (UniqueName: \"kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd\") pod \"certified-operators-wpwwf\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.965256 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw"
Oct 06 07:19:14 crc kubenswrapper[4769]: I1006 07:19:14.992185 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wpwwf"
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.028108 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g67x9"]
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.151240 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"]
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.216788 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"]
Oct 06 07:19:15 crc kubenswrapper[4769]: W1006 07:19:15.465488 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod532d76f2_810d_4ece_b679_b206fed5b5f7.slice/crio-b7a5ef06ec523d0130e080d414b6d71c76a55dc2960b09146c980139a2e342ec WatchSource:0}: Error finding container b7a5ef06ec523d0130e080d414b6d71c76a55dc2960b09146c980139a2e342ec: Status 404 returned error can't find the container with id b7a5ef06ec523d0130e080d414b6d71c76a55dc2960b09146c980139a2e342ec
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.516075 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Oct 06 07:19:15 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld
Oct 06 07:19:15 crc kubenswrapper[4769]: [+]process-running ok
Oct 06 07:19:15 crc kubenswrapper[4769]: healthz check failed
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.516134 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.798116 4769 generic.go:334] "Generic (PLEG): container finished" podID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerID="ff055f9c0d2765d56b58e13581bac680d5bccccc6a7a2f7938207ee14a3f3848" exitCode=0
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.798185 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerDied","Data":"ff055f9c0d2765d56b58e13581bac680d5bccccc6a7a2f7938207ee14a3f3848"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.798472 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerStarted","Data":"f63e0d8acfce4fb44f5ba0ff05bbe52d183d6b178f6c5a17d5d1ef5f27dc5d2e"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.800269 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.800504 4769 generic.go:334] "Generic (PLEG): container finished" podID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerID="312f7b86135d5584ce03aafe797dc7f0179c41d3e5f0882dc81d9dc14b13619e" exitCode=0
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.800557 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerDied","Data":"312f7b86135d5584ce03aafe797dc7f0179c41d3e5f0882dc81d9dc14b13619e"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.800573 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerStarted","Data":"a3a55b795a5a90556931959b2ad9e1f1d989cec94c2dc86d3d642780d85d0bdd"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.803308 4769 generic.go:334] "Generic (PLEG): container finished" podID="560bbc8b-e88c-4763-911f-1cbe20773590" containerID="92663b3f61866d084b52f0520f19e8198aeb0fecb926b07408714722fb21f536" exitCode=0
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.803346 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"560bbc8b-e88c-4763-911f-1cbe20773590","Type":"ContainerDied","Data":"92663b3f61866d084b52f0520f19e8198aeb0fecb926b07408714722fb21f536"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.804922 4769 generic.go:334] "Generic (PLEG): container finished" podID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerID="7304f88799c782b2a3e3a4adec8ca849ba73f356e3a35d40a6fc7b1e0a5c4806" exitCode=0
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.804949 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerDied","Data":"7304f88799c782b2a3e3a4adec8ca849ba73f356e3a35d40a6fc7b1e0a5c4806"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.804995 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerStarted","Data":"b7a5ef06ec523d0130e080d414b6d71c76a55dc2960b09146c980139a2e342ec"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.808410 4769 generic.go:334] "Generic (PLEG): container finished" podID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerID="2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6" exitCode=0
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.808477 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerDied","Data":"2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.808553 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerStarted","Data":"6d316c0a1ebdf587cddfe1af44d8666ab643a10e6f5f03406e7013e65db571b9"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.810349 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" event={"ID":"4af768db-1836-4f0b-a47f-1b5b609c5703","Type":"ContainerStarted","Data":"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.810402 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" event={"ID":"4af768db-1836-4f0b-a47f-1b5b609c5703","Type":"ContainerStarted","Data":"ec682835aeaf4cc007230181d482b1dd74ffc9e12e637c324d9680f802adbc5d"}
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.846067 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" podStartSLOduration=130.846048313 podStartE2EDuration="2m10.846048313s" podCreationTimestamp="2025-10-06 07:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:15.844637835 +0000 UTC m=+152.368918982" watchObservedRunningTime="2025-10-06 07:19:15.846048313 +0000 UTC m=+152.370329470"
Oct 06 07:19:15 crc kubenswrapper[4769]: I1006 07:19:15.944352 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6bj7f"
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.036389 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.117195 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume\") pod \"39a765da-9609-4f66-8444-51c13efe3d3c\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") "
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.117282 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume\") pod \"39a765da-9609-4f66-8444-51c13efe3d3c\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") "
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.117365 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgjkw\" (UniqueName: \"kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw\") pod \"39a765da-9609-4f66-8444-51c13efe3d3c\" (UID: \"39a765da-9609-4f66-8444-51c13efe3d3c\") "
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.118187 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume" (OuterVolumeSpecName: "config-volume") pod "39a765da-9609-4f66-8444-51c13efe3d3c" (UID: "39a765da-9609-4f66-8444-51c13efe3d3c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.122318 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw" (OuterVolumeSpecName: "kube-api-access-bgjkw") pod "39a765da-9609-4f66-8444-51c13efe3d3c" (UID: "39a765da-9609-4f66-8444-51c13efe3d3c"). InnerVolumeSpecName "kube-api-access-bgjkw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.122671 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "39a765da-9609-4f66-8444-51c13efe3d3c" (UID: "39a765da-9609-4f66-8444-51c13efe3d3c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.174151 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.219403 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/39a765da-9609-4f66-8444-51c13efe3d3c-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.219520 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39a765da-9609-4f66-8444-51c13efe3d3c-config-volume\") on node \"crc\" DevicePath \"\""
Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.219543 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgjkw\" (UniqueName: \"kubernetes.io/projected/39a765da-9609-4f66-8444-51c13efe3d3c-kube-api-access-bgjkw\") on node \"crc\"
DevicePath \"\"" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.235179 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:19:16 crc kubenswrapper[4769]: E1006 07:19:16.235475 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39a765da-9609-4f66-8444-51c13efe3d3c" containerName="collect-profiles" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.235495 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="39a765da-9609-4f66-8444-51c13efe3d3c" containerName="collect-profiles" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.235703 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="39a765da-9609-4f66-8444-51c13efe3d3c" containerName="collect-profiles" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.236643 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.238922 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.248375 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.321779 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.321846 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.321903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsfdd\" (UniqueName: \"kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.423179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.424230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.424394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsfdd\" (UniqueName: \"kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.424847 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.425100 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.430860 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.431213 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.432523 4769 patch_prober.go:28] interesting pod/console-f9d7485db-lsg5p container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.432576 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-lsg5p" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.16:8443/health\": dial tcp 10.217.0.16:8443: connect: connection refused" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.444734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsfdd\" (UniqueName: \"kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd\") pod \"redhat-marketplace-q79hs\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " 
pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.511140 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-s2fwl" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.513891 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:16 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:16 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:16 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.514054 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.554877 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.558834 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.632900 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.634248 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.648708 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.680220 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.680248 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.708974 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.733324 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtdp\" (UniqueName: \"kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.733371 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.733458 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content\") pod \"redhat-marketplace-hvt6h\" (UID: 
\"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.830076 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.834397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mtdp\" (UniqueName: \"kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.834541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.834638 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.836233 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.836947 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.849335 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" event={"ID":"39a765da-9609-4f66-8444-51c13efe3d3c","Type":"ContainerDied","Data":"2c7bdc629b5ec2fea738a0e706302e874457727865f10fad60e1244886db8dca"} Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.849398 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c7bdc629b5ec2fea738a0e706302e874457727865f10fad60e1244886db8dca" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.849882 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.850959 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:16 crc kubenswrapper[4769]: W1006 07:19:16.862311 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ae764ed_f939_4eb6_aa83_af373e51ad2f.slice/crio-229eea96e260687b8294035f7fd9c61cb7dfcefaf38d6be708322cb39def4b8f WatchSource:0}: Error finding container 229eea96e260687b8294035f7fd9c61cb7dfcefaf38d6be708322cb39def4b8f: Status 404 returned error can't find the container with id 229eea96e260687b8294035f7fd9c61cb7dfcefaf38d6be708322cb39def4b8f Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.864210 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8f7sk" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.868454 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mtdp\" (UniqueName: \"kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp\") pod \"redhat-marketplace-hvt6h\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.981524 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:19:16 crc kubenswrapper[4769]: I1006 07:19:16.993279 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.008953 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zhqnp" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.136326 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.136370 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.136760 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 
07:19:17.136777 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.229452 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.230469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.236881 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.242502 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.276780 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.391759 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir\") pod \"560bbc8b-e88c-4763-911f-1cbe20773590\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.391808 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access\") pod \"560bbc8b-e88c-4763-911f-1cbe20773590\" (UID: \"560bbc8b-e88c-4763-911f-1cbe20773590\") " Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.392071 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.392106 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktzgf\" (UniqueName: \"kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.392171 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " 
pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.392263 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "560bbc8b-e88c-4763-911f-1cbe20773590" (UID: "560bbc8b-e88c-4763-911f-1cbe20773590"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.405680 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "560bbc8b-e88c-4763-911f-1cbe20773590" (UID: "560bbc8b-e88c-4763-911f-1cbe20773590"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.493529 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktzgf\" (UniqueName: \"kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.493625 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.493660 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content\") pod 
\"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.493699 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/560bbc8b-e88c-4763-911f-1cbe20773590-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.493712 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/560bbc8b-e88c-4763-911f-1cbe20773590-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.494235 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.494695 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.509018 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.527234 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:17 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 
06 07:19:17 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:17 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.527308 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.529389 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktzgf\" (UniqueName: \"kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf\") pod \"redhat-operators-dt4qr\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.557702 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.627233 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:19:17 crc kubenswrapper[4769]: E1006 07:19:17.662047 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560bbc8b-e88c-4763-911f-1cbe20773590" containerName="pruner" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.662076 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="560bbc8b-e88c-4763-911f-1cbe20773590" containerName="pruner" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.662218 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="560bbc8b-e88c-4763-911f-1cbe20773590" containerName="pruner" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.662868 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.662952 4769 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.677999 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:19:17 crc kubenswrapper[4769]: W1006 07:19:17.679851 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a2d32ea_cdc3_4c1e_b33b_c6d20afbb808.slice/crio-bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85 WatchSource:0}: Error finding container bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85: Status 404 returned error can't find the container with id bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85 Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.807690 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.807794 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.807821 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9tf\" (UniqueName: \"kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf\") pod \"redhat-operators-6fdc5\" (UID: 
\"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.875824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerStarted","Data":"bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85"} Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.891236 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"560bbc8b-e88c-4763-911f-1cbe20773590","Type":"ContainerDied","Data":"d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9"} Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.891268 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.891280 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d6f502acb9cc12e872a84489951d3d3d38cb796b0588bce1cea1c5a39130a9" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.894988 4769 generic.go:334] "Generic (PLEG): container finished" podID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerID="0f89f7c4ec4da046c50cbdb58978f1dac27353d1fd90c4061a8f4246bb30c9b0" exitCode=0 Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.895233 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerDied","Data":"0f89f7c4ec4da046c50cbdb58978f1dac27353d1fd90c4061a8f4246bb30c9b0"} Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.895285 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" 
event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerStarted","Data":"229eea96e260687b8294035f7fd9c61cb7dfcefaf38d6be708322cb39def4b8f"} Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.908863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.908920 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.908937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9tf\" (UniqueName: \"kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.909901 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.909953 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content\") pod \"redhat-operators-6fdc5\" (UID: 
\"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.935408 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9tf\" (UniqueName: \"kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf\") pod \"redhat-operators-6fdc5\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:17 crc kubenswrapper[4769]: I1006 07:19:17.938656 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.003004 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.516032 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:18 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:18 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:18 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.516108 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.519970 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:19:18 crc kubenswrapper[4769]: W1006 07:19:18.555596 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a369720_ae07_4cc3_8f51_0805ad87ec14.slice/crio-352d76e1472d5c7e845b1b8c4aa1ae9d564c49a0a96fb67fb00339db1b5326bd WatchSource:0}: Error finding container 352d76e1472d5c7e845b1b8c4aa1ae9d564c49a0a96fb67fb00339db1b5326bd: Status 404 returned error can't find the container with id 352d76e1472d5c7e845b1b8c4aa1ae9d564c49a0a96fb67fb00339db1b5326bd Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.954733 4769 generic.go:334] "Generic (PLEG): container finished" podID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerID="c7e9c5f7aea2183e061da8fa97842109398e916cbdb98277de1ac3bccf5f7a55" exitCode=0 Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.954877 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerDied","Data":"c7e9c5f7aea2183e061da8fa97842109398e916cbdb98277de1ac3bccf5f7a55"} Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.954904 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerStarted","Data":"1fb5cbf9741596a835c3efccef3690520cfdffd943f8d7a12c670fd98dac063b"} Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.959038 4769 generic.go:334] "Generic (PLEG): container finished" podID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerID="1e70a050c86df40da0a04c1eeab4210f4f4112062f0877bd98c87639c6fbe1af" exitCode=0 Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.959225 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerDied","Data":"1e70a050c86df40da0a04c1eeab4210f4f4112062f0877bd98c87639c6fbe1af"} Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.963017 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerStarted","Data":"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9"} Oct 06 07:19:18 crc kubenswrapper[4769]: I1006 07:19:18.963046 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerStarted","Data":"352d76e1472d5c7e845b1b8c4aa1ae9d564c49a0a96fb67fb00339db1b5326bd"} Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.070892 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pfz99" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.382142 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.382873 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.386531 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.392301 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.417415 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.463252 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.463309 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.512240 4769 patch_prober.go:28] interesting pod/router-default-5444994796-xnztd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 06 07:19:19 crc kubenswrapper[4769]: [-]has-synced failed: reason withheld Oct 06 07:19:19 crc kubenswrapper[4769]: [+]process-running ok Oct 06 07:19:19 crc kubenswrapper[4769]: healthz check failed Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.512306 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-xnztd" podUID="9bab245c-413d-4d13-949e-02e428400df4" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.564599 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.564676 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.564764 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.599808 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.707306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.983934 4769 generic.go:334] "Generic (PLEG): container finished" podID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerID="64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9" exitCode=0 Oct 06 07:19:19 crc kubenswrapper[4769]: I1006 07:19:19.984083 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerDied","Data":"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9"} Oct 06 07:19:20 crc kubenswrapper[4769]: I1006 07:19:20.228028 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 06 07:19:20 crc kubenswrapper[4769]: W1006 07:19:20.289824 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod8322e4aa_6d41_453d_b3ad_4cd7d6c8962e.slice/crio-79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878 WatchSource:0}: Error finding container 79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878: Status 404 returned error can't find the container with id 79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878 Oct 06 07:19:20 crc kubenswrapper[4769]: I1006 07:19:20.512533 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:20 crc kubenswrapper[4769]: I1006 07:19:20.516952 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-xnztd" Oct 06 07:19:20 crc kubenswrapper[4769]: I1006 07:19:20.998864 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e","Type":"ContainerStarted","Data":"79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878"} Oct 06 07:19:22 crc kubenswrapper[4769]: I1006 07:19:22.010107 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e","Type":"ContainerStarted","Data":"231c43f8be72c2d0572e49fafdbb324e11860a4fcf62303ceed8d9e947b46f53"} Oct 06 07:19:22 crc kubenswrapper[4769]: I1006 07:19:22.026004 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.025987216 podStartE2EDuration="3.025987216s" podCreationTimestamp="2025-10-06 07:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:22.023176928 +0000 UTC m=+158.547458075" watchObservedRunningTime="2025-10-06 07:19:22.025987216 +0000 UTC m=+158.550268363" Oct 06 07:19:22 
crc kubenswrapper[4769]: I1006 07:19:22.245814 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:19:22 crc kubenswrapper[4769]: I1006 07:19:22.245913 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:19:23 crc kubenswrapper[4769]: I1006 07:19:23.018168 4769 generic.go:334] "Generic (PLEG): container finished" podID="8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" containerID="231c43f8be72c2d0572e49fafdbb324e11860a4fcf62303ceed8d9e947b46f53" exitCode=0 Oct 06 07:19:23 crc kubenswrapper[4769]: I1006 07:19:23.018247 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e","Type":"ContainerDied","Data":"231c43f8be72c2d0572e49fafdbb324e11860a4fcf62303ceed8d9e947b46f53"} Oct 06 07:19:26 crc kubenswrapper[4769]: I1006 07:19:26.435091 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:26 crc kubenswrapper[4769]: I1006 07:19:26.438973 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:19:27 crc kubenswrapper[4769]: I1006 07:19:27.135309 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" 
start-of-body= Oct 06 07:19:27 crc kubenswrapper[4769]: I1006 07:19:27.135366 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:27 crc kubenswrapper[4769]: I1006 07:19:27.135387 4769 patch_prober.go:28] interesting pod/downloads-7954f5f757-l5vs7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Oct 06 07:19:27 crc kubenswrapper[4769]: I1006 07:19:27.135452 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l5vs7" podUID="b3761f9e-65a2-4d35-8028-20f7492cea9a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Oct 06 07:19:28 crc kubenswrapper[4769]: I1006 07:19:28.210349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:19:28 crc kubenswrapper[4769]: I1006 07:19:28.222958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cbddd0e8-9d17-4278-acdc-e35d2d8d70f9-metrics-certs\") pod \"network-metrics-daemon-wxwxs\" (UID: \"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9\") " pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:19:28 crc kubenswrapper[4769]: I1006 07:19:28.234876 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wxwxs" Oct 06 07:19:34 crc kubenswrapper[4769]: I1006 07:19:34.972238 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.068732 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.097416 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e","Type":"ContainerDied","Data":"79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878"} Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.097467 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79db699b0d2e8d81138ad50651c24691f4b41678d0edc83ef216cb08dd81f878" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.097502 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.202571 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access\") pod \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.202899 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir\") pod \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\" (UID: \"8322e4aa-6d41-453d-b3ad-4cd7d6c8962e\") " Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.203009 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" (UID: "8322e4aa-6d41-453d-b3ad-4cd7d6c8962e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.203238 4769 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.212489 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" (UID: "8322e4aa-6d41-453d-b3ad-4cd7d6c8962e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:19:35 crc kubenswrapper[4769]: I1006 07:19:35.304823 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8322e4aa-6d41-453d-b3ad-4cd7d6c8962e-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:37 crc kubenswrapper[4769]: I1006 07:19:37.153009 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l5vs7" Oct 06 07:19:38 crc kubenswrapper[4769]: E1006 07:19:38.065262 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 06 07:19:38 crc kubenswrapper[4769]: E1006 07:19:38.065738 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkqpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wpwwf_openshift-marketplace(532d76f2-810d-4ece-b679-b206fed5b5f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 07:19:38 crc kubenswrapper[4769]: E1006 07:19:38.066938 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wpwwf" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" Oct 06 07:19:40 crc 
kubenswrapper[4769]: E1006 07:19:40.676082 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wpwwf" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" Oct 06 07:19:41 crc kubenswrapper[4769]: E1006 07:19:41.830248 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 06 07:19:41 crc kubenswrapper[4769]: E1006 07:19:41.830463 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6zm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-7ckl4_openshift-marketplace(4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 07:19:41 crc kubenswrapper[4769]: E1006 07:19:41.831672 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-7ckl4" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" Oct 06 07:19:42 crc 
kubenswrapper[4769]: E1006 07:19:42.150719 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-7ckl4" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" Oct 06 07:19:44 crc kubenswrapper[4769]: I1006 07:19:44.843597 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wxwxs"] Oct 06 07:19:45 crc kubenswrapper[4769]: W1006 07:19:45.412583 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbddd0e8_9d17_4278_acdc_e35d2d8d70f9.slice/crio-2568ebb582a8b7fff27399c87a1763c9ff8e7cc4165a56decdfd6da86a3b4f85 WatchSource:0}: Error finding container 2568ebb582a8b7fff27399c87a1763c9ff8e7cc4165a56decdfd6da86a3b4f85: Status 404 returned error can't find the container with id 2568ebb582a8b7fff27399c87a1763c9ff8e7cc4165a56decdfd6da86a3b4f85 Oct 06 07:19:45 crc kubenswrapper[4769]: E1006 07:19:45.651783 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 07:19:45 crc kubenswrapper[4769]: E1006 07:19:45.652342 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsfdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q79hs_openshift-marketplace(7ae764ed-f939-4eb6-aa83-af373e51ad2f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 07:19:45 crc kubenswrapper[4769]: E1006 07:19:45.653944 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q79hs" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" Oct 06 07:19:45 crc 
kubenswrapper[4769]: E1006 07:19:45.667096 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 06 07:19:45 crc kubenswrapper[4769]: E1006 07:19:45.667268 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mtdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-hvt6h_openshift-marketplace(1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 06 07:19:45 crc kubenswrapper[4769]: E1006 07:19:45.668852 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-hvt6h" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.179370 4769 generic.go:334] "Generic (PLEG): container finished" podID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerID="6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a" exitCode=0 Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.179481 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerDied","Data":"6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a"} Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.183147 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerStarted","Data":"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102"} Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.185473 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerStarted","Data":"b7c33eadbdadf0afd8505e98a0bc11ccc813dbf814fda186f8d984b1bbd7a9e2"} Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.191126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-wxwxs" event={"ID":"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9","Type":"ContainerStarted","Data":"6f998ceb9cb39aac8f44c2f20629d02dd68038716f846e2c73164be1cd9beca6"} Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.191169 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wxwxs" event={"ID":"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9","Type":"ContainerStarted","Data":"2568ebb582a8b7fff27399c87a1763c9ff8e7cc4165a56decdfd6da86a3b4f85"} Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.193209 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerStarted","Data":"b0aad1db340febf10d57500b980b071c883a9f3bc6fc03090fd874c89ee17d3a"} Oct 06 07:19:46 crc kubenswrapper[4769]: E1006 07:19:46.199711 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q79hs" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" Oct 06 07:19:46 crc kubenswrapper[4769]: E1006 07:19:46.199931 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-hvt6h" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" Oct 06 07:19:46 crc kubenswrapper[4769]: I1006 07:19:46.867357 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cvx4t" Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.198508 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-wxwxs" event={"ID":"cbddd0e8-9d17-4278-acdc-e35d2d8d70f9","Type":"ContainerStarted","Data":"722d6069f010b07bfee16916f4529c28e8c7a19955918ff536b1801a22bec765"} Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.200596 4769 generic.go:334] "Generic (PLEG): container finished" podID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerID="b0aad1db340febf10d57500b980b071c883a9f3bc6fc03090fd874c89ee17d3a" exitCode=0 Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.200649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerDied","Data":"b0aad1db340febf10d57500b980b071c883a9f3bc6fc03090fd874c89ee17d3a"} Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.203680 4769 generic.go:334] "Generic (PLEG): container finished" podID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerID="e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102" exitCode=0 Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.203810 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerDied","Data":"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102"} Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.207198 4769 generic.go:334] "Generic (PLEG): container finished" podID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerID="b7c33eadbdadf0afd8505e98a0bc11ccc813dbf814fda186f8d984b1bbd7a9e2" exitCode=0 Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.207247 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerDied","Data":"b7c33eadbdadf0afd8505e98a0bc11ccc813dbf814fda186f8d984b1bbd7a9e2"} Oct 06 07:19:47 crc kubenswrapper[4769]: I1006 07:19:47.216135 4769 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wxwxs" podStartSLOduration=163.216115785 podStartE2EDuration="2m43.216115785s" podCreationTimestamp="2025-10-06 07:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:19:47.212936977 +0000 UTC m=+183.737218124" watchObservedRunningTime="2025-10-06 07:19:47.216115785 +0000 UTC m=+183.740396932" Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.216342 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerStarted","Data":"93588d9054000012b49794283a9eac88b5012e86f2c3f5b0e45ca8acb4a87c3f"} Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.219817 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerStarted","Data":"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029"} Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.222378 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerStarted","Data":"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398"} Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.224737 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerStarted","Data":"ad59df40b08afb7c16ab4d6a6e08f78c25a66b4f9dc45cc1058db078ce8d5ce3"} Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.236340 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dt4qr" 
podStartSLOduration=2.310916014 podStartE2EDuration="31.236324502s" podCreationTimestamp="2025-10-06 07:19:17 +0000 UTC" firstStartedPulling="2025-10-06 07:19:18.959927945 +0000 UTC m=+155.484209092" lastFinishedPulling="2025-10-06 07:19:47.885336413 +0000 UTC m=+184.409617580" observedRunningTime="2025-10-06 07:19:48.235244732 +0000 UTC m=+184.759525879" watchObservedRunningTime="2025-10-06 07:19:48.236324502 +0000 UTC m=+184.760605649" Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.256128 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6fdc5" podStartSLOduration=2.409887034 podStartE2EDuration="31.256109382s" podCreationTimestamp="2025-10-06 07:19:17 +0000 UTC" firstStartedPulling="2025-10-06 07:19:18.964612094 +0000 UTC m=+155.488893241" lastFinishedPulling="2025-10-06 07:19:47.810834442 +0000 UTC m=+184.335115589" observedRunningTime="2025-10-06 07:19:48.255398712 +0000 UTC m=+184.779679869" watchObservedRunningTime="2025-10-06 07:19:48.256109382 +0000 UTC m=+184.780390539" Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.275645 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgg5p" podStartSLOduration=2.192011893 podStartE2EDuration="34.275628664s" podCreationTimestamp="2025-10-06 07:19:14 +0000 UTC" firstStartedPulling="2025-10-06 07:19:15.800009774 +0000 UTC m=+152.324290921" lastFinishedPulling="2025-10-06 07:19:47.883626515 +0000 UTC m=+184.407907692" observedRunningTime="2025-10-06 07:19:48.274201775 +0000 UTC m=+184.798482972" watchObservedRunningTime="2025-10-06 07:19:48.275628664 +0000 UTC m=+184.799909811" Oct 06 07:19:48 crc kubenswrapper[4769]: I1006 07:19:48.301608 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g67x9" podStartSLOduration=2.445562892 podStartE2EDuration="34.301588907s" podCreationTimestamp="2025-10-06 
07:19:14 +0000 UTC" firstStartedPulling="2025-10-06 07:19:15.80962263 +0000 UTC m=+152.333903777" lastFinishedPulling="2025-10-06 07:19:47.665648645 +0000 UTC m=+184.189929792" observedRunningTime="2025-10-06 07:19:48.299952211 +0000 UTC m=+184.824233358" watchObservedRunningTime="2025-10-06 07:19:48.301588907 +0000 UTC m=+184.825870054" Oct 06 07:19:52 crc kubenswrapper[4769]: I1006 07:19:52.245529 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:19:52 crc kubenswrapper[4769]: I1006 07:19:52.245906 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:19:52 crc kubenswrapper[4769]: I1006 07:19:52.598450 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 06 07:19:54 crc kubenswrapper[4769]: I1006 07:19:54.609503 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:19:54 crc kubenswrapper[4769]: I1006 07:19:54.609869 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:19:54 crc kubenswrapper[4769]: I1006 07:19:54.807479 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:54 crc kubenswrapper[4769]: I1006 07:19:54.807527 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:55 crc kubenswrapper[4769]: I1006 07:19:55.612647 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:55 crc kubenswrapper[4769]: I1006 07:19:55.614079 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:19:55 crc kubenswrapper[4769]: I1006 07:19:55.653762 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:19:55 crc kubenswrapper[4769]: I1006 07:19:55.655186 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:56 crc kubenswrapper[4769]: I1006 07:19:56.263007 4769 generic.go:334] "Generic (PLEG): container finished" podID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerID="a53c46f60dcddbcd9504fdbd536c62aa28ebabca0acd51324dd4fdeeec557eb5" exitCode=0 Oct 06 07:19:56 crc kubenswrapper[4769]: I1006 07:19:56.263080 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerDied","Data":"a53c46f60dcddbcd9504fdbd536c62aa28ebabca0acd51324dd4fdeeec557eb5"} Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.271869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerStarted","Data":"be75c1899233aa1372f18bda05d24297adcbe9865c8886c892d0bba4618dad35"} Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.274089 4769 generic.go:334] "Generic (PLEG): container finished" podID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerID="5dbe678be5605840f7b63bf71581bd67873a9aaa922da8c847aae2e67a1e915f" exitCode=0 Oct 06 07:19:57 crc 
kubenswrapper[4769]: I1006 07:19:57.274123 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerDied","Data":"5dbe678be5605840f7b63bf71581bd67873a9aaa922da8c847aae2e67a1e915f"} Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.291306 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ckl4" podStartSLOduration=2.226645446 podStartE2EDuration="43.291288536s" podCreationTimestamp="2025-10-06 07:19:14 +0000 UTC" firstStartedPulling="2025-10-06 07:19:15.80457692 +0000 UTC m=+152.328858067" lastFinishedPulling="2025-10-06 07:19:56.86922001 +0000 UTC m=+193.393501157" observedRunningTime="2025-10-06 07:19:57.288761465 +0000 UTC m=+193.813042632" watchObservedRunningTime="2025-10-06 07:19:57.291288536 +0000 UTC m=+193.815569683" Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.438687 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g67x9"] Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.438888 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g67x9" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="registry-server" containerID="cri-o://a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029" gracePeriod=2 Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.559061 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.559175 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:57 crc kubenswrapper[4769]: I1006 07:19:57.617196 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.003955 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.004165 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.010137 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.048826 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.131960 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content\") pod \"37d641e8-e528-4cff-a83b-21b7a7dd200b\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.132041 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities\") pod \"37d641e8-e528-4cff-a83b-21b7a7dd200b\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.132066 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzl79\" (UniqueName: \"kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79\") pod \"37d641e8-e528-4cff-a83b-21b7a7dd200b\" (UID: \"37d641e8-e528-4cff-a83b-21b7a7dd200b\") " Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.134250 4769 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities" (OuterVolumeSpecName: "utilities") pod "37d641e8-e528-4cff-a83b-21b7a7dd200b" (UID: "37d641e8-e528-4cff-a83b-21b7a7dd200b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.142546 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79" (OuterVolumeSpecName: "kube-api-access-zzl79") pod "37d641e8-e528-4cff-a83b-21b7a7dd200b" (UID: "37d641e8-e528-4cff-a83b-21b7a7dd200b"). InnerVolumeSpecName "kube-api-access-zzl79". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.192913 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37d641e8-e528-4cff-a83b-21b7a7dd200b" (UID: "37d641e8-e528-4cff-a83b-21b7a7dd200b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.233502 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.233674 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzl79\" (UniqueName: \"kubernetes.io/projected/37d641e8-e528-4cff-a83b-21b7a7dd200b-kube-api-access-zzl79\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.233761 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37d641e8-e528-4cff-a83b-21b7a7dd200b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.284349 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerStarted","Data":"d3317f8e3efbdfdde27e7b48ee65d654f015c46b3f0b8f639ee2ead6808f3f44"} Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.288222 4769 generic.go:334] "Generic (PLEG): container finished" podID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerID="a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029" exitCode=0 Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.288935 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g67x9" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.289354 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerDied","Data":"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029"} Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.289392 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g67x9" event={"ID":"37d641e8-e528-4cff-a83b-21b7a7dd200b","Type":"ContainerDied","Data":"6d316c0a1ebdf587cddfe1af44d8666ab643a10e6f5f03406e7013e65db571b9"} Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.289413 4769 scope.go:117] "RemoveContainer" containerID="a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.317601 4769 scope.go:117] "RemoveContainer" containerID="6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.336651 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wpwwf" podStartSLOduration=2.315677632 podStartE2EDuration="44.336635541s" podCreationTimestamp="2025-10-06 07:19:14 +0000 UTC" firstStartedPulling="2025-10-06 07:19:15.807293656 +0000 UTC m=+152.331574803" lastFinishedPulling="2025-10-06 07:19:57.828251565 +0000 UTC m=+194.352532712" observedRunningTime="2025-10-06 07:19:58.308883449 +0000 UTC m=+194.833164596" watchObservedRunningTime="2025-10-06 07:19:58.336635541 +0000 UTC m=+194.860916688" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.337078 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g67x9"] Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.340003 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-g67x9"] Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.341092 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.341757 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.347855 4769 scope.go:117] "RemoveContainer" containerID="2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.414570 4769 scope.go:117] "RemoveContainer" containerID="a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029" Oct 06 07:19:58 crc kubenswrapper[4769]: E1006 07:19:58.415397 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029\": container with ID starting with a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029 not found: ID does not exist" containerID="a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.415458 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029"} err="failed to get container status \"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029\": rpc error: code = NotFound desc = could not find container \"a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029\": container with ID starting with a2c716d5d44c171d4a327dcbdebf948e902764284d1b2989f936a533007e8029 not found: ID does not exist" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.415511 4769 scope.go:117] "RemoveContainer" 
containerID="6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a" Oct 06 07:19:58 crc kubenswrapper[4769]: E1006 07:19:58.415797 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a\": container with ID starting with 6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a not found: ID does not exist" containerID="6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.415820 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a"} err="failed to get container status \"6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a\": rpc error: code = NotFound desc = could not find container \"6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a\": container with ID starting with 6dc7dbc10a4ee3286a3f1dcf61256bc02f854036109f45cdfb33378d1aee3a4a not found: ID does not exist" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.415834 4769 scope.go:117] "RemoveContainer" containerID="2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6" Oct 06 07:19:58 crc kubenswrapper[4769]: E1006 07:19:58.416062 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6\": container with ID starting with 2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6 not found: ID does not exist" containerID="2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6" Oct 06 07:19:58 crc kubenswrapper[4769]: I1006 07:19:58.416080 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6"} err="failed to get container status \"2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6\": rpc error: code = NotFound desc = could not find container \"2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6\": container with ID starting with 2fe4abd49ff81766a884a8df0fbeb5aa4ec87d7cc602519c1ad18b0fe19e32e6 not found: ID does not exist" Oct 06 07:19:59 crc kubenswrapper[4769]: I1006 07:19:59.295374 4769 generic.go:334] "Generic (PLEG): container finished" podID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerID="d2d1d5c1872e49b29c172c5ae1ec4ddf9dd2d4961720ddc962aec74715befd55" exitCode=0 Oct 06 07:19:59 crc kubenswrapper[4769]: I1006 07:19:59.295994 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerDied","Data":"d2d1d5c1872e49b29c172c5ae1ec4ddf9dd2d4961720ddc962aec74715befd55"} Oct 06 07:20:00 crc kubenswrapper[4769]: I1006 07:20:00.173474 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" path="/var/lib/kubelet/pods/37d641e8-e528-4cff-a83b-21b7a7dd200b/volumes" Oct 06 07:20:00 crc kubenswrapper[4769]: I1006 07:20:00.303740 4769 generic.go:334] "Generic (PLEG): container finished" podID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerID="a73821618ee216f8f5f5f2d6a750bd9afdac302b191c262933eda27277bb392e" exitCode=0 Oct 06 07:20:00 crc kubenswrapper[4769]: I1006 07:20:00.303793 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerDied","Data":"a73821618ee216f8f5f5f2d6a750bd9afdac302b191c262933eda27277bb392e"} Oct 06 07:20:00 crc kubenswrapper[4769]: I1006 07:20:00.308263 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerStarted","Data":"dab2ca22cf77d15a06bef0bd14cf1c4bc8ab17ce33b593814c008b887f2b1341"} Oct 06 07:20:00 crc kubenswrapper[4769]: I1006 07:20:00.352993 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvt6h" podStartSLOduration=3.573944027 podStartE2EDuration="44.352969485s" podCreationTimestamp="2025-10-06 07:19:16 +0000 UTC" firstStartedPulling="2025-10-06 07:19:18.960604023 +0000 UTC m=+155.484885170" lastFinishedPulling="2025-10-06 07:19:59.739629481 +0000 UTC m=+196.263910628" observedRunningTime="2025-10-06 07:20:00.348746737 +0000 UTC m=+196.873027944" watchObservedRunningTime="2025-10-06 07:20:00.352969485 +0000 UTC m=+196.877250632" Oct 06 07:20:01 crc kubenswrapper[4769]: I1006 07:20:01.314868 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerStarted","Data":"91b0f009fecd646e782a3c4b3f47736c21a91560bb35511c6aaa3cad9c862671"} Oct 06 07:20:01 crc kubenswrapper[4769]: I1006 07:20:01.335016 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q79hs" podStartSLOduration=2.546757326 podStartE2EDuration="45.33500204s" podCreationTimestamp="2025-10-06 07:19:16 +0000 UTC" firstStartedPulling="2025-10-06 07:19:17.906408882 +0000 UTC m=+154.430690029" lastFinishedPulling="2025-10-06 07:20:00.694653596 +0000 UTC m=+197.218934743" observedRunningTime="2025-10-06 07:20:01.332051378 +0000 UTC m=+197.856332525" watchObservedRunningTime="2025-10-06 07:20:01.33500204 +0000 UTC m=+197.859283187" Oct 06 07:20:01 crc kubenswrapper[4769]: I1006 07:20:01.509400 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:20:01 crc 
kubenswrapper[4769]: I1006 07:20:01.839666 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:20:01 crc kubenswrapper[4769]: I1006 07:20:01.839876 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6fdc5" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="registry-server" containerID="cri-o://986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398" gracePeriod=2 Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.243923 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.326216 4769 generic.go:334] "Generic (PLEG): container finished" podID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerID="986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398" exitCode=0 Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.326333 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6fdc5" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.326343 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerDied","Data":"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398"} Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.326413 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6fdc5" event={"ID":"5a369720-ae07-4cc3-8f51-0805ad87ec14","Type":"ContainerDied","Data":"352d76e1472d5c7e845b1b8c4aa1ae9d564c49a0a96fb67fb00339db1b5326bd"} Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.326455 4769 scope.go:117] "RemoveContainer" containerID="986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.343714 4769 scope.go:117] "RemoveContainer" containerID="e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.372819 4769 scope.go:117] "RemoveContainer" containerID="64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.389271 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t9tf\" (UniqueName: \"kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf\") pod \"5a369720-ae07-4cc3-8f51-0805ad87ec14\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.389363 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities\") pod \"5a369720-ae07-4cc3-8f51-0805ad87ec14\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " Oct 06 07:20:02 crc 
kubenswrapper[4769]: I1006 07:20:02.390230 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities" (OuterVolumeSpecName: "utilities") pod "5a369720-ae07-4cc3-8f51-0805ad87ec14" (UID: "5a369720-ae07-4cc3-8f51-0805ad87ec14"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.390756 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content\") pod \"5a369720-ae07-4cc3-8f51-0805ad87ec14\" (UID: \"5a369720-ae07-4cc3-8f51-0805ad87ec14\") " Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.391006 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.395311 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf" (OuterVolumeSpecName: "kube-api-access-6t9tf") pod "5a369720-ae07-4cc3-8f51-0805ad87ec14" (UID: "5a369720-ae07-4cc3-8f51-0805ad87ec14"). InnerVolumeSpecName "kube-api-access-6t9tf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.396574 4769 scope.go:117] "RemoveContainer" containerID="986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398" Oct 06 07:20:02 crc kubenswrapper[4769]: E1006 07:20:02.397065 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398\": container with ID starting with 986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398 not found: ID does not exist" containerID="986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.397123 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398"} err="failed to get container status \"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398\": rpc error: code = NotFound desc = could not find container \"986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398\": container with ID starting with 986cd555ff4eae0e5cb5779df11c4ffee496ff5988c8e093e2c84f8aed2da398 not found: ID does not exist" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.397168 4769 scope.go:117] "RemoveContainer" containerID="e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102" Oct 06 07:20:02 crc kubenswrapper[4769]: E1006 07:20:02.397686 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102\": container with ID starting with e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102 not found: ID does not exist" containerID="e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.397842 
4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102"} err="failed to get container status \"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102\": rpc error: code = NotFound desc = could not find container \"e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102\": container with ID starting with e0d4f38c7ffbc9f2bd0757f31bd13a30d5c2a0b8d0eb43735705615e76acc102 not found: ID does not exist" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.397915 4769 scope.go:117] "RemoveContainer" containerID="64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9" Oct 06 07:20:02 crc kubenswrapper[4769]: E1006 07:20:02.398538 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9\": container with ID starting with 64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9 not found: ID does not exist" containerID="64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.398594 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9"} err="failed to get container status \"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9\": rpc error: code = NotFound desc = could not find container \"64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9\": container with ID starting with 64535f033b7e97b7f1f33a9ac6c4a5fd00fe6d5c3dbbd37eeb73b1540c7e5fb9 not found: ID does not exist" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.472553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content" 
(OuterVolumeSpecName: "catalog-content") pod "5a369720-ae07-4cc3-8f51-0805ad87ec14" (UID: "5a369720-ae07-4cc3-8f51-0805ad87ec14"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.491703 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a369720-ae07-4cc3-8f51-0805ad87ec14-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.491751 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t9tf\" (UniqueName: \"kubernetes.io/projected/5a369720-ae07-4cc3-8f51-0805ad87ec14-kube-api-access-6t9tf\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.658474 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:20:02 crc kubenswrapper[4769]: I1006 07:20:02.663271 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6fdc5"] Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 07:20:04.182950 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" path="/var/lib/kubelet/pods/5a369720-ae07-4cc3-8f51-0805ad87ec14/volumes" Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 07:20:04.360414 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 07:20:04.360475 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 07:20:04.411875 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 
07:20:04.993104 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:04 crc kubenswrapper[4769]: I1006 07:20:04.993159 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:05 crc kubenswrapper[4769]: I1006 07:20:05.030249 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:05 crc kubenswrapper[4769]: I1006 07:20:05.388390 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:05 crc kubenswrapper[4769]: I1006 07:20:05.388807 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:06 crc kubenswrapper[4769]: I1006 07:20:06.560036 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:06 crc kubenswrapper[4769]: I1006 07:20:06.560561 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:06 crc kubenswrapper[4769]: I1006 07:20:06.615071 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:06 crc kubenswrapper[4769]: I1006 07:20:06.982489 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:06 crc kubenswrapper[4769]: I1006 07:20:06.982596 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:07 crc kubenswrapper[4769]: I1006 07:20:07.020027 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:07 crc kubenswrapper[4769]: I1006 07:20:07.393863 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:07 crc kubenswrapper[4769]: I1006 07:20:07.399100 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.242157 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"] Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.242567 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wpwwf" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="registry-server" containerID="cri-o://d3317f8e3efbdfdde27e7b48ee65d654f015c46b3f0b8f639ee2ead6808f3f44" gracePeriod=2 Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.369256 4769 generic.go:334] "Generic (PLEG): container finished" podID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerID="d3317f8e3efbdfdde27e7b48ee65d654f015c46b3f0b8f639ee2ead6808f3f44" exitCode=0 Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.369336 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerDied","Data":"d3317f8e3efbdfdde27e7b48ee65d654f015c46b3f0b8f639ee2ead6808f3f44"} Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.590805 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.674015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities\") pod \"532d76f2-810d-4ece-b679-b206fed5b5f7\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.674079 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content\") pod \"532d76f2-810d-4ece-b679-b206fed5b5f7\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.674119 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkqpd\" (UniqueName: \"kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd\") pod \"532d76f2-810d-4ece-b679-b206fed5b5f7\" (UID: \"532d76f2-810d-4ece-b679-b206fed5b5f7\") " Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.674724 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities" (OuterVolumeSpecName: "utilities") pod "532d76f2-810d-4ece-b679-b206fed5b5f7" (UID: "532d76f2-810d-4ece-b679-b206fed5b5f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.686588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd" (OuterVolumeSpecName: "kube-api-access-mkqpd") pod "532d76f2-810d-4ece-b679-b206fed5b5f7" (UID: "532d76f2-810d-4ece-b679-b206fed5b5f7"). InnerVolumeSpecName "kube-api-access-mkqpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.723913 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "532d76f2-810d-4ece-b679-b206fed5b5f7" (UID: "532d76f2-810d-4ece-b679-b206fed5b5f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.775725 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.775765 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/532d76f2-810d-4ece-b679-b206fed5b5f7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:08 crc kubenswrapper[4769]: I1006 07:20:08.775780 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkqpd\" (UniqueName: \"kubernetes.io/projected/532d76f2-810d-4ece-b679-b206fed5b5f7-kube-api-access-mkqpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.376975 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wpwwf" event={"ID":"532d76f2-810d-4ece-b679-b206fed5b5f7","Type":"ContainerDied","Data":"b7a5ef06ec523d0130e080d414b6d71c76a55dc2960b09146c980139a2e342ec"} Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.377053 4769 scope.go:117] "RemoveContainer" containerID="d3317f8e3efbdfdde27e7b48ee65d654f015c46b3f0b8f639ee2ead6808f3f44" Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.377068 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wpwwf" Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.393866 4769 scope.go:117] "RemoveContainer" containerID="5dbe678be5605840f7b63bf71581bd67873a9aaa922da8c847aae2e67a1e915f" Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.405733 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"] Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.410092 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wpwwf"] Oct 06 07:20:09 crc kubenswrapper[4769]: I1006 07:20:09.426157 4769 scope.go:117] "RemoveContainer" containerID="7304f88799c782b2a3e3a4adec8ca849ba73f356e3a35d40a6fc7b1e0a5c4806" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.043938 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.044285 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvt6h" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="registry-server" containerID="cri-o://dab2ca22cf77d15a06bef0bd14cf1c4bc8ab17ce33b593814c008b887f2b1341" gracePeriod=2 Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.171389 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" path="/var/lib/kubelet/pods/532d76f2-810d-4ece-b679-b206fed5b5f7/volumes" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.383678 4769 generic.go:334] "Generic (PLEG): container finished" podID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerID="dab2ca22cf77d15a06bef0bd14cf1c4bc8ab17ce33b593814c008b887f2b1341" exitCode=0 Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.383769 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerDied","Data":"dab2ca22cf77d15a06bef0bd14cf1c4bc8ab17ce33b593814c008b887f2b1341"} Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.383817 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvt6h" event={"ID":"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808","Type":"ContainerDied","Data":"bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85"} Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.383829 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bafa834de0576091a875d61d868d073c300434e4112b307eb21f35fd66456a85" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.389828 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.497837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mtdp\" (UniqueName: \"kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp\") pod \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.497960 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content\") pod \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\" (UID: \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.497993 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities\") pod \"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\" (UID: 
\"1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808\") " Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.498795 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities" (OuterVolumeSpecName: "utilities") pod "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" (UID: "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.504213 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp" (OuterVolumeSpecName: "kube-api-access-4mtdp") pod "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" (UID: "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808"). InnerVolumeSpecName "kube-api-access-4mtdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.525464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" (UID: "1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.599535 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.599568 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:10 crc kubenswrapper[4769]: I1006 07:20:10.599580 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mtdp\" (UniqueName: \"kubernetes.io/projected/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808-kube-api-access-4mtdp\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:11 crc kubenswrapper[4769]: I1006 07:20:11.389941 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvt6h" Oct 06 07:20:11 crc kubenswrapper[4769]: I1006 07:20:11.421337 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:20:11 crc kubenswrapper[4769]: I1006 07:20:11.421759 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvt6h"] Oct 06 07:20:12 crc kubenswrapper[4769]: I1006 07:20:12.185352 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" path="/var/lib/kubelet/pods/1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808/volumes" Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.245933 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.246525 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.246571 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.247222 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.247277 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205" gracePeriod=600 Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.447342 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205" exitCode=0 Oct 06 07:20:22 crc kubenswrapper[4769]: I1006 07:20:22.447385 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205"} Oct 06 07:20:23 crc kubenswrapper[4769]: I1006 07:20:23.456472 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6"} Oct 06 07:20:26 crc kubenswrapper[4769]: I1006 07:20:26.531350 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" containerID="cri-o://62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab" gracePeriod=15 Oct 06 07:20:26 crc kubenswrapper[4769]: I1006 07:20:26.548250 4769 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-kn4s5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" start-of-body= Oct 06 07:20:26 crc kubenswrapper[4769]: I1006 07:20:26.548311 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.31:6443/healthz\": dial tcp 10.217.0.31:6443: connect: connection refused" Oct 06 07:20:26 crc kubenswrapper[4769]: I1006 07:20:26.991960 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032129 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t"] Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032331 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032342 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032354 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032360 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032370 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032376 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032383 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032388 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032438 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032448 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032458 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032464 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032473 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032479 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032489 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032497 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032509 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032516 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032528 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032535 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="extract-utilities" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032549 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032555 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032565 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" containerName="pruner" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032572 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" containerName="pruner" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032578 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032584 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="extract-content" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.032594 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032600 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032694 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8322e4aa-6d41-453d-b3ad-4cd7d6c8962e" containerName="pruner" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032703 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2d32ea-cdc3-4c1e-b33b-c6d20afbb808" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032712 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="532d76f2-810d-4ece-b679-b206fed5b5f7" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032720 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerName="oauth-openshift" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032730 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d641e8-e528-4cff-a83b-21b7a7dd200b" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.032739 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a369720-ae07-4cc3-8f51-0805ad87ec14" containerName="registry-server" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.033066 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.054070 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t"] Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113379 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113417 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113487 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113507 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113544 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113589 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113608 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9428\" (UniqueName: \"kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113607 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113634 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113746 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113778 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113803 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113822 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: 
\"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113840 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.113865 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert\") pod \"91429be1-1ad0-4685-af4a-2184431a1d9f\" (UID: \"91429be1-1ad0-4685-af4a-2184431a1d9f\") " Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114019 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-policies\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-dir\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114122 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-login\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: 
\"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114146 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114162 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-router-certs\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-service-ca\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" 
Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114277 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114313 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114373 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114443 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88zv6\" (UniqueName: \"kubernetes.io/projected/aa4dc572-35a5-445b-a9cb-6f3278c1774b-kube-api-access-88zv6\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114453 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114460 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114473 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114548 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-session\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114585 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-error\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114638 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114652 4769 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.114666 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.115700 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.116034 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.120639 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.120713 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.124404 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.125001 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.130813 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428" (OuterVolumeSpecName: "kube-api-access-t9428") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "kube-api-access-t9428". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.130856 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.130967 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.131978 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.132098 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "91429be1-1ad0-4685-af4a-2184431a1d9f" (UID: "91429be1-1ad0-4685-af4a-2184431a1d9f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215641 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88zv6\" (UniqueName: \"kubernetes.io/projected/aa4dc572-35a5-445b-a9cb-6f3278c1774b-kube-api-access-88zv6\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215766 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-session\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215802 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-error\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-policies\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215909 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-dir\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215943 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-login\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.215987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " 
pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-router-certs\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216337 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-service-ca\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216477 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216599 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216655 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216861 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216885 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216907 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9428\" (UniqueName: \"kubernetes.io/projected/91429be1-1ad0-4685-af4a-2184431a1d9f-kube-api-access-t9428\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216929 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 
07:20:27.216951 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216974 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.216993 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.217011 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.217032 4769 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.217052 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.217074 4769 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/91429be1-1ad0-4685-af4a-2184431a1d9f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.217974 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-policies\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.218311 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aa4dc572-35a5-445b-a9cb-6f3278c1774b-audit-dir\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.218589 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-service-ca\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.218978 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.219761 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-session\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.220169 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.220294 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.221111 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.221314 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " 
pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.221850 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-login\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.222490 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-system-router-certs\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.222587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.223496 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aa4dc572-35a5-445b-a9cb-6f3278c1774b-v4-0-config-user-template-error\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.247374 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88zv6\" (UniqueName: 
\"kubernetes.io/projected/aa4dc572-35a5-445b-a9cb-6f3278c1774b-kube-api-access-88zv6\") pod \"oauth-openshift-58b6cd7fd8-mqc4t\" (UID: \"aa4dc572-35a5-445b-a9cb-6f3278c1774b\") " pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.352517 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.481747 4769 generic.go:334] "Generic (PLEG): container finished" podID="91429be1-1ad0-4685-af4a-2184431a1d9f" containerID="62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab" exitCode=0 Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.481787 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" event={"ID":"91429be1-1ad0-4685-af4a-2184431a1d9f","Type":"ContainerDied","Data":"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab"} Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.481814 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" event={"ID":"91429be1-1ad0-4685-af4a-2184431a1d9f","Type":"ContainerDied","Data":"823d99702d0f40217499c28105e5cd51256701f4df43ea48e6b36fa4ce4223b5"} Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.481831 4769 scope.go:117] "RemoveContainer" containerID="62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.481859 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-kn4s5" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.525557 4769 scope.go:117] "RemoveContainer" containerID="62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab" Oct 06 07:20:27 crc kubenswrapper[4769]: E1006 07:20:27.526693 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab\": container with ID starting with 62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab not found: ID does not exist" containerID="62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.527847 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab"} err="failed to get container status \"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab\": rpc error: code = NotFound desc = could not find container \"62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab\": container with ID starting with 62a58036456cca45aaf39ed1f6f65494392266dd4623b13363fb3c42e6faedab not found: ID does not exist" Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.544367 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.547938 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-kn4s5"] Oct 06 07:20:27 crc kubenswrapper[4769]: I1006 07:20:27.850218 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t"] Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.179007 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="91429be1-1ad0-4685-af4a-2184431a1d9f" path="/var/lib/kubelet/pods/91429be1-1ad0-4685-af4a-2184431a1d9f/volumes" Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.490317 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" event={"ID":"aa4dc572-35a5-445b-a9cb-6f3278c1774b","Type":"ContainerStarted","Data":"dd58230f41657061f7e04025554491862631319d8fcf1a23c9b93fa56bdcbe17"} Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.490946 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" event={"ID":"aa4dc572-35a5-445b-a9cb-6f3278c1774b","Type":"ContainerStarted","Data":"c06e3d7c7aa1f9f666d1a801b861d5fb9aa0c45a3d240158bbef0c78e2085d79"} Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.491184 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.528346 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" podStartSLOduration=27.528308219 podStartE2EDuration="27.528308219s" podCreationTimestamp="2025-10-06 07:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:20:28.518819965 +0000 UTC m=+225.043101132" watchObservedRunningTime="2025-10-06 07:20:28.528308219 +0000 UTC m=+225.052589396" Oct 06 07:20:28 crc kubenswrapper[4769]: I1006 07:20:28.936654 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-58b6cd7fd8-mqc4t" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.147051 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 
07:20:40.148127 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tgg5p" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="registry-server" containerID="cri-o://ad59df40b08afb7c16ab4d6a6e08f78c25a66b4f9dc45cc1058db078ce8d5ce3" gracePeriod=30 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.154761 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ckl4"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.154981 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ckl4" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="registry-server" containerID="cri-o://be75c1899233aa1372f18bda05d24297adcbe9865c8886c892d0bba4618dad35" gracePeriod=30 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.162849 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.163065 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" containerID="cri-o://44ded000e408a808794358baf6c59192039ab2b39297b20a4ab81d9115163b08" gracePeriod=30 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.182808 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dccdw"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.183724 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.187302 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.187608 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q79hs" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="registry-server" containerID="cri-o://91b0f009fecd646e782a3c4b3f47736c21a91560bb35511c6aaa3cad9c862671" gracePeriod=30 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.200595 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.200890 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dt4qr" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="registry-server" containerID="cri-o://93588d9054000012b49794283a9eac88b5012e86f2c3f5b0e45ca8acb4a87c3f" gracePeriod=30 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.207639 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dccdw"] Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.298654 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjl8\" (UniqueName: \"kubernetes.io/projected/42a92800-c31c-405a-acd7-5c33fcb1aa05-kube-api-access-nnjl8\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.298822 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.298896 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.400163 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.400226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjl8\" (UniqueName: \"kubernetes.io/projected/42a92800-c31c-405a-acd7-5c33fcb1aa05-kube-api-access-nnjl8\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.400279 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.402805 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.411132 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/42a92800-c31c-405a-acd7-5c33fcb1aa05-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.418984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnjl8\" (UniqueName: \"kubernetes.io/projected/42a92800-c31c-405a-acd7-5c33fcb1aa05-kube-api-access-nnjl8\") pod \"marketplace-operator-79b997595-dccdw\" (UID: \"42a92800-c31c-405a-acd7-5c33fcb1aa05\") " pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.502248 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.565224 4769 generic.go:334] "Generic (PLEG): container finished" podID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerID="93588d9054000012b49794283a9eac88b5012e86f2c3f5b0e45ca8acb4a87c3f" exitCode=0 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.565326 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerDied","Data":"93588d9054000012b49794283a9eac88b5012e86f2c3f5b0e45ca8acb4a87c3f"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.569384 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerDied","Data":"91b0f009fecd646e782a3c4b3f47736c21a91560bb35511c6aaa3cad9c862671"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.569954 4769 generic.go:334] "Generic (PLEG): container finished" podID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerID="91b0f009fecd646e782a3c4b3f47736c21a91560bb35511c6aaa3cad9c862671" exitCode=0 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.571793 4769 generic.go:334] "Generic (PLEG): container finished" podID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerID="44ded000e408a808794358baf6c59192039ab2b39297b20a4ab81d9115163b08" exitCode=0 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.571869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" event={"ID":"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa","Type":"ContainerDied","Data":"44ded000e408a808794358baf6c59192039ab2b39297b20a4ab81d9115163b08"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.571918 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" event={"ID":"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa","Type":"ContainerDied","Data":"b745eb3417fcd7d8829ef5f36015e71ed1258f95e76254f53fbe25c3baab486e"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.571930 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b745eb3417fcd7d8829ef5f36015e71ed1258f95e76254f53fbe25c3baab486e" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.581765 4769 generic.go:334] "Generic (PLEG): container finished" podID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerID="ad59df40b08afb7c16ab4d6a6e08f78c25a66b4f9dc45cc1058db078ce8d5ce3" exitCode=0 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.581831 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerDied","Data":"ad59df40b08afb7c16ab4d6a6e08f78c25a66b4f9dc45cc1058db078ce8d5ce3"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.593375 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.595919 4769 generic.go:334] "Generic (PLEG): container finished" podID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerID="be75c1899233aa1372f18bda05d24297adcbe9865c8886c892d0bba4618dad35" exitCode=0 Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.596026 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerDied","Data":"be75c1899233aa1372f18bda05d24297adcbe9865c8886c892d0bba4618dad35"} Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.684349 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.689694 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.700215 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.702673 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics\") pod \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.702742 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca\") pod \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.702775 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5xd4\" (UniqueName: \"kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4\") pod \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\" (UID: \"dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.704407 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" (UID: "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.711039 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4" (OuterVolumeSpecName: "kube-api-access-r5xd4") pod "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" (UID: "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa"). InnerVolumeSpecName "kube-api-access-r5xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.712621 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" (UID: "dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.712793 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.803376 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities\") pod \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.804716 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities" (OuterVolumeSpecName: "utilities") pod "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" (UID: "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.804832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content\") pod \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.804861 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsfdd\" (UniqueName: \"kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd\") pod \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.807925 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd" (OuterVolumeSpecName: "kube-api-access-qsfdd") pod "7ae764ed-f939-4eb6-aa83-af373e51ad2f" (UID: "7ae764ed-f939-4eb6-aa83-af373e51ad2f"). InnerVolumeSpecName "kube-api-access-qsfdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.808592 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities\") pod \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.808710 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6zm5\" (UniqueName: \"kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5\") pod \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\" (UID: \"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809127 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content\") pod \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\" (UID: \"7ae764ed-f939-4eb6-aa83-af373e51ad2f\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809174 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities\") pod \"112b0f4b-079f-4103-83cd-25d7c215c3a9\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809192 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content\") pod \"32abbcb8-969c-48c1-a041-c8dca1d04e24\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content\") pod \"112b0f4b-079f-4103-83cd-25d7c215c3a9\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809241 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities\") pod \"32abbcb8-969c-48c1-a041-c8dca1d04e24\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809273 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities" (OuterVolumeSpecName: "utilities") pod "7ae764ed-f939-4eb6-aa83-af373e51ad2f" (UID: "7ae764ed-f939-4eb6-aa83-af373e51ad2f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809311 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktzgf\" (UniqueName: \"kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf\") pod \"112b0f4b-079f-4103-83cd-25d7c215c3a9\" (UID: \"112b0f4b-079f-4103-83cd-25d7c215c3a9\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809348 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrrc8\" (UniqueName: \"kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8\") pod \"32abbcb8-969c-48c1-a041-c8dca1d04e24\" (UID: \"32abbcb8-969c-48c1-a041-c8dca1d04e24\") " Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809754 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsfdd\" (UniqueName: \"kubernetes.io/projected/7ae764ed-f939-4eb6-aa83-af373e51ad2f-kube-api-access-qsfdd\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc 
kubenswrapper[4769]: I1006 07:20:40.809768 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809779 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809789 4769 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809797 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5xd4\" (UniqueName: \"kubernetes.io/projected/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa-kube-api-access-r5xd4\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.809806 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.810105 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities" (OuterVolumeSpecName: "utilities") pod "112b0f4b-079f-4103-83cd-25d7c215c3a9" (UID: "112b0f4b-079f-4103-83cd-25d7c215c3a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.811006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5" (OuterVolumeSpecName: "kube-api-access-r6zm5") pod "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" (UID: "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8"). InnerVolumeSpecName "kube-api-access-r6zm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.811650 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities" (OuterVolumeSpecName: "utilities") pod "32abbcb8-969c-48c1-a041-c8dca1d04e24" (UID: "32abbcb8-969c-48c1-a041-c8dca1d04e24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.812181 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8" (OuterVolumeSpecName: "kube-api-access-vrrc8") pod "32abbcb8-969c-48c1-a041-c8dca1d04e24" (UID: "32abbcb8-969c-48c1-a041-c8dca1d04e24"). InnerVolumeSpecName "kube-api-access-vrrc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.814866 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf" (OuterVolumeSpecName: "kube-api-access-ktzgf") pod "112b0f4b-079f-4103-83cd-25d7c215c3a9" (UID: "112b0f4b-079f-4103-83cd-25d7c215c3a9"). InnerVolumeSpecName "kube-api-access-ktzgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.859337 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" (UID: "4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.861281 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ae764ed-f939-4eb6-aa83-af373e51ad2f" (UID: "7ae764ed-f939-4eb6-aa83-af373e51ad2f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.890689 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32abbcb8-969c-48c1-a041-c8dca1d04e24" (UID: "32abbcb8-969c-48c1-a041-c8dca1d04e24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.900340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "112b0f4b-079f-4103-83cd-25d7c215c3a9" (UID: "112b0f4b-079f-4103-83cd-25d7c215c3a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911110 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktzgf\" (UniqueName: \"kubernetes.io/projected/112b0f4b-079f-4103-83cd-25d7c215c3a9-kube-api-access-ktzgf\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911151 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrrc8\" (UniqueName: \"kubernetes.io/projected/32abbcb8-969c-48c1-a041-c8dca1d04e24-kube-api-access-vrrc8\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911162 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911170 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6zm5\" (UniqueName: \"kubernetes.io/projected/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8-kube-api-access-r6zm5\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911179 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ae764ed-f939-4eb6-aa83-af373e51ad2f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911188 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911197 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc 
kubenswrapper[4769]: I1006 07:20:40.911206 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112b0f4b-079f-4103-83cd-25d7c215c3a9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.911214 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32abbcb8-969c-48c1-a041-c8dca1d04e24-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:20:40 crc kubenswrapper[4769]: I1006 07:20:40.968582 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dccdw"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.602894 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ckl4" event={"ID":"4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8","Type":"ContainerDied","Data":"a3a55b795a5a90556931959b2ad9e1f1d989cec94c2dc86d3d642780d85d0bdd"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.602947 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ckl4" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.602957 4769 scope.go:117] "RemoveContainer" containerID="be75c1899233aa1372f18bda05d24297adcbe9865c8886c892d0bba4618dad35" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.606580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dt4qr" event={"ID":"112b0f4b-079f-4103-83cd-25d7c215c3a9","Type":"ContainerDied","Data":"1fb5cbf9741596a835c3efccef3690520cfdffd943f8d7a12c670fd98dac063b"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.606622 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dt4qr" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.609502 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" event={"ID":"42a92800-c31c-405a-acd7-5c33fcb1aa05","Type":"ContainerStarted","Data":"fc3d8dca0067d315e7f3ea0309948cb851a1e0746133ada33650c8652b1b36cb"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.609549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" event={"ID":"42a92800-c31c-405a-acd7-5c33fcb1aa05","Type":"ContainerStarted","Data":"b0584828e68e24bb6ab27990fc65a327f75f30b5bb1652e410fa3289a88b4644"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.609863 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.611704 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q79hs" event={"ID":"7ae764ed-f939-4eb6-aa83-af373e51ad2f","Type":"ContainerDied","Data":"229eea96e260687b8294035f7fd9c61cb7dfcefaf38d6be708322cb39def4b8f"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.611803 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q79hs" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.613373 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.614784 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgg5p" event={"ID":"32abbcb8-969c-48c1-a041-c8dca1d04e24","Type":"ContainerDied","Data":"f63e0d8acfce4fb44f5ba0ff05bbe52d183d6b178f6c5a17d5d1ef5f27dc5d2e"} Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.614821 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jm2xh" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.614842 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgg5p" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.622565 4769 scope.go:117] "RemoveContainer" containerID="a53c46f60dcddbcd9504fdbd536c62aa28ebabca0acd51324dd4fdeeec557eb5" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.630462 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dccdw" podStartSLOduration=1.630442982 podStartE2EDuration="1.630442982s" podCreationTimestamp="2025-10-06 07:20:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:20:41.628775136 +0000 UTC m=+238.153056283" watchObservedRunningTime="2025-10-06 07:20:41.630442982 +0000 UTC m=+238.154724119" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.651459 4769 scope.go:117] "RemoveContainer" containerID="312f7b86135d5584ce03aafe797dc7f0179c41d3e5f0882dc81d9dc14b13619e" Oct 06 07:20:41 crc kubenswrapper[4769]: 
I1006 07:20:41.663909 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ckl4"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.667263 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ckl4"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.677531 4769 scope.go:117] "RemoveContainer" containerID="93588d9054000012b49794283a9eac88b5012e86f2c3f5b0e45ca8acb4a87c3f" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.701373 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.708431 4769 scope.go:117] "RemoveContainer" containerID="b0aad1db340febf10d57500b980b071c883a9f3bc6fc03090fd874c89ee17d3a" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.708560 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgg5p"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.713476 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.720034 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jm2xh"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.730714 4769 scope.go:117] "RemoveContainer" containerID="c7e9c5f7aea2183e061da8fa97842109398e916cbdb98277de1ac3bccf5f7a55" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.734490 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.737818 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dt4qr"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.739995 4769 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.742179 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q79hs"] Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.742372 4769 scope.go:117] "RemoveContainer" containerID="91b0f009fecd646e782a3c4b3f47736c21a91560bb35511c6aaa3cad9c862671" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.756443 4769 scope.go:117] "RemoveContainer" containerID="a73821618ee216f8f5f5f2d6a750bd9afdac302b191c262933eda27277bb392e" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.771878 4769 scope.go:117] "RemoveContainer" containerID="0f89f7c4ec4da046c50cbdb58978f1dac27353d1fd90c4061a8f4246bb30c9b0" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.783299 4769 scope.go:117] "RemoveContainer" containerID="ad59df40b08afb7c16ab4d6a6e08f78c25a66b4f9dc45cc1058db078ce8d5ce3" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.796564 4769 scope.go:117] "RemoveContainer" containerID="b7c33eadbdadf0afd8505e98a0bc11ccc813dbf814fda186f8d984b1bbd7a9e2" Oct 06 07:20:41 crc kubenswrapper[4769]: I1006 07:20:41.811220 4769 scope.go:117] "RemoveContainer" containerID="ff055f9c0d2765d56b58e13581bac680d5bccccc6a7a2f7938207ee14a3f3848" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.173244 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" path="/var/lib/kubelet/pods/112b0f4b-079f-4103-83cd-25d7c215c3a9/volumes" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.174127 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" path="/var/lib/kubelet/pods/32abbcb8-969c-48c1-a041-c8dca1d04e24/volumes" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.174909 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" 
path="/var/lib/kubelet/pods/4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8/volumes" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.176298 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" path="/var/lib/kubelet/pods/7ae764ed-f939-4eb6-aa83-af373e51ad2f/volumes" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.177169 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" path="/var/lib/kubelet/pods/dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa/volumes" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.366741 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-glqgj"] Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367017 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367034 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367046 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367057 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367069 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367078 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 
07:20:42.367089 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367098 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367107 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367115 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367128 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367136 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367146 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367154 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367171 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367179 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 
07:20:42.367195 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367204 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="extract-content" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367215 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367223 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367237 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367246 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367259 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367267 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: E1006 07:20:42.367278 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367286 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="extract-utilities" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 
07:20:42.367397 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="32abbcb8-969c-48c1-a041-c8dca1d04e24" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367410 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae764ed-f939-4eb6-aa83-af373e51ad2f" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367443 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee769cc-e37f-4ba6-80eb-c7f0cf55dee8" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367455 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde44a54-2b70-4ea6-8ed4-69f0ff7f25aa" containerName="marketplace-operator" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.367472 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="112b0f4b-079f-4103-83cd-25d7c215c3a9" containerName="registry-server" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.369233 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.371330 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glqgj"] Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.371994 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.427824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klsr7\" (UniqueName: \"kubernetes.io/projected/9cfd3cb7-51a3-4926-ad98-533e3285dea9-kube-api-access-klsr7\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.427880 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-utilities\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.427914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-catalog-content\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.529281 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klsr7\" (UniqueName: \"kubernetes.io/projected/9cfd3cb7-51a3-4926-ad98-533e3285dea9-kube-api-access-klsr7\") pod \"certified-operators-glqgj\" 
(UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.529335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-utilities\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.529363 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-catalog-content\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.529844 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-catalog-content\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.529932 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cfd3cb7-51a3-4926-ad98-533e3285dea9-utilities\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.549773 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klsr7\" (UniqueName: \"kubernetes.io/projected/9cfd3cb7-51a3-4926-ad98-533e3285dea9-kube-api-access-klsr7\") pod \"certified-operators-glqgj\" (UID: \"9cfd3cb7-51a3-4926-ad98-533e3285dea9\") " 
pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.600150 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.601290 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.608474 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.612337 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.698500 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.732670 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5d5\" (UniqueName: \"kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.732719 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.732806 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.833577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5d5\" (UniqueName: \"kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.833951 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.833992 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.834398 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.834984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.852066 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5d5\" (UniqueName: \"kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5\") pod \"community-operators-ttqgb\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:42 crc kubenswrapper[4769]: I1006 07:20:42.931188 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.072091 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-glqgj"] Oct 06 07:20:43 crc kubenswrapper[4769]: W1006 07:20:43.078913 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cfd3cb7_51a3_4926_ad98_533e3285dea9.slice/crio-cea39b63d101b05fccbc78c46d57abf7d48c744a350f2a9d5bcdc4726bdb483c WatchSource:0}: Error finding container cea39b63d101b05fccbc78c46d57abf7d48c744a350f2a9d5bcdc4726bdb483c: Status 404 returned error can't find the container with id cea39b63d101b05fccbc78c46d57abf7d48c744a350f2a9d5bcdc4726bdb483c Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.290285 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:20:43 crc kubenswrapper[4769]: W1006 07:20:43.302980 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod931992f0_da1a_4ed5_80fa_268fe083c2d8.slice/crio-504d11937a2857bbc8e154b137ee85348075e45397687f6065553f5abc12e62a WatchSource:0}: Error finding container 504d11937a2857bbc8e154b137ee85348075e45397687f6065553f5abc12e62a: Status 404 returned error can't find the container with id 504d11937a2857bbc8e154b137ee85348075e45397687f6065553f5abc12e62a Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.645544 4769 generic.go:334] "Generic (PLEG): container finished" podID="9cfd3cb7-51a3-4926-ad98-533e3285dea9" containerID="170cdaacd7d4874f907fbd823b91388ee20a5e8a7cc892de9010e1d64a6217c0" exitCode=0 Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.645592 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glqgj" event={"ID":"9cfd3cb7-51a3-4926-ad98-533e3285dea9","Type":"ContainerDied","Data":"170cdaacd7d4874f907fbd823b91388ee20a5e8a7cc892de9010e1d64a6217c0"} Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.645637 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glqgj" event={"ID":"9cfd3cb7-51a3-4926-ad98-533e3285dea9","Type":"ContainerStarted","Data":"cea39b63d101b05fccbc78c46d57abf7d48c744a350f2a9d5bcdc4726bdb483c"} Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.647391 4769 generic.go:334] "Generic (PLEG): container finished" podID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerID="38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6" exitCode=0 Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.647776 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerDied","Data":"38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6"} Oct 06 07:20:43 crc kubenswrapper[4769]: I1006 07:20:43.647820 4769 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerStarted","Data":"504d11937a2857bbc8e154b137ee85348075e45397687f6065553f5abc12e62a"} Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.655396 4769 generic.go:334] "Generic (PLEG): container finished" podID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerID="7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497" exitCode=0 Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.655468 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerDied","Data":"7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497"} Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.660465 4769 generic.go:334] "Generic (PLEG): container finished" podID="9cfd3cb7-51a3-4926-ad98-533e3285dea9" containerID="258ca3e3a95d610bd8da30d53c4a631fa476e19af757b45b74d6aa541ed426ea" exitCode=0 Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.660505 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glqgj" event={"ID":"9cfd3cb7-51a3-4926-ad98-533e3285dea9","Type":"ContainerDied","Data":"258ca3e3a95d610bd8da30d53c4a631fa476e19af757b45b74d6aa541ed426ea"} Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.765494 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vr9gm"] Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.766775 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.768809 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.769729 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr9gm"] Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.857054 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-utilities\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.857159 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxj62\" (UniqueName: \"kubernetes.io/projected/a0efcfec-1046-4b55-8fed-8f271f6a9d99-kube-api-access-xxj62\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.857256 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-catalog-content\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.960254 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxj62\" (UniqueName: \"kubernetes.io/projected/a0efcfec-1046-4b55-8fed-8f271f6a9d99-kube-api-access-xxj62\") pod \"redhat-marketplace-vr9gm\" (UID: 
\"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.960315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-catalog-content\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.960340 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-utilities\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.960870 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-utilities\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.962492 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0efcfec-1046-4b55-8fed-8f271f6a9d99-catalog-content\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.985189 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mdvtg"] Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.986309 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.988267 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 06 07:20:44 crc kubenswrapper[4769]: I1006 07:20:44.990340 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxj62\" (UniqueName: \"kubernetes.io/projected/a0efcfec-1046-4b55-8fed-8f271f6a9d99-kube-api-access-xxj62\") pod \"redhat-marketplace-vr9gm\" (UID: \"a0efcfec-1046-4b55-8fed-8f271f6a9d99\") " pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.008393 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdvtg"] Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.061264 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxsz\" (UniqueName: \"kubernetes.io/projected/7428fd04-528c-4403-8f36-227127f6ee19-kube-api-access-bpxsz\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.061349 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-utilities\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.061370 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-catalog-content\") pod \"redhat-operators-mdvtg\" (UID: 
\"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.107234 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.162556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-catalog-content\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.162626 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpxsz\" (UniqueName: \"kubernetes.io/projected/7428fd04-528c-4403-8f36-227127f6ee19-kube-api-access-bpxsz\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.162706 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-utilities\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.163073 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-catalog-content\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.163165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/7428fd04-528c-4403-8f36-227127f6ee19-utilities\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.181552 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpxsz\" (UniqueName: \"kubernetes.io/projected/7428fd04-528c-4403-8f36-227127f6ee19-kube-api-access-bpxsz\") pod \"redhat-operators-mdvtg\" (UID: \"7428fd04-528c-4403-8f36-227127f6ee19\") " pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.331688 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.513465 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vr9gm"] Oct 06 07:20:45 crc kubenswrapper[4769]: W1006 07:20:45.519525 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0efcfec_1046_4b55_8fed_8f271f6a9d99.slice/crio-8b923326b3a9fa40b3fdc54a61d254236567eafec1171fac7588ca6537ffaf46 WatchSource:0}: Error finding container 8b923326b3a9fa40b3fdc54a61d254236567eafec1171fac7588ca6537ffaf46: Status 404 returned error can't find the container with id 8b923326b3a9fa40b3fdc54a61d254236567eafec1171fac7588ca6537ffaf46 Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.666776 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-glqgj" event={"ID":"9cfd3cb7-51a3-4926-ad98-533e3285dea9","Type":"ContainerStarted","Data":"8cea2b8e86ec09235548f07dd4e5dc9a46e59033a09894335e989fd1bfc16a83"} Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.668061 4769 generic.go:334] "Generic (PLEG): container finished" 
podID="a0efcfec-1046-4b55-8fed-8f271f6a9d99" containerID="ff3ba54c2f89ea94ab007dda130c02b4eaf088753861febcaef06a5d8f7a84b2" exitCode=0 Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.668125 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr9gm" event={"ID":"a0efcfec-1046-4b55-8fed-8f271f6a9d99","Type":"ContainerDied","Data":"ff3ba54c2f89ea94ab007dda130c02b4eaf088753861febcaef06a5d8f7a84b2"} Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.668182 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr9gm" event={"ID":"a0efcfec-1046-4b55-8fed-8f271f6a9d99","Type":"ContainerStarted","Data":"8b923326b3a9fa40b3fdc54a61d254236567eafec1171fac7588ca6537ffaf46"} Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.671884 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerStarted","Data":"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023"} Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.690761 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-glqgj" podStartSLOduration=2.263920585 podStartE2EDuration="3.690745389s" podCreationTimestamp="2025-10-06 07:20:42 +0000 UTC" firstStartedPulling="2025-10-06 07:20:43.647326841 +0000 UTC m=+240.171607988" lastFinishedPulling="2025-10-06 07:20:45.074151645 +0000 UTC m=+241.598432792" observedRunningTime="2025-10-06 07:20:45.690164072 +0000 UTC m=+242.214445249" watchObservedRunningTime="2025-10-06 07:20:45.690745389 +0000 UTC m=+242.215026536" Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.706931 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mdvtg"] Oct 06 07:20:45 crc kubenswrapper[4769]: I1006 07:20:45.708270 4769 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttqgb" podStartSLOduration=2.274902922 podStartE2EDuration="3.708248355s" podCreationTimestamp="2025-10-06 07:20:42 +0000 UTC" firstStartedPulling="2025-10-06 07:20:43.648630828 +0000 UTC m=+240.172911975" lastFinishedPulling="2025-10-06 07:20:45.081976271 +0000 UTC m=+241.606257408" observedRunningTime="2025-10-06 07:20:45.706325691 +0000 UTC m=+242.230606838" watchObservedRunningTime="2025-10-06 07:20:45.708248355 +0000 UTC m=+242.232529502" Oct 06 07:20:45 crc kubenswrapper[4769]: W1006 07:20:45.715278 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7428fd04_528c_4403_8f36_227127f6ee19.slice/crio-57f320e24d3f5af15d8209c02d815c642035729bc85d3fae3c2c1ae5811b6fc1 WatchSource:0}: Error finding container 57f320e24d3f5af15d8209c02d815c642035729bc85d3fae3c2c1ae5811b6fc1: Status 404 returned error can't find the container with id 57f320e24d3f5af15d8209c02d815c642035729bc85d3fae3c2c1ae5811b6fc1 Oct 06 07:20:46 crc kubenswrapper[4769]: I1006 07:20:46.678233 4769 generic.go:334] "Generic (PLEG): container finished" podID="7428fd04-528c-4403-8f36-227127f6ee19" containerID="90dce64246e88e939872e356fcfbaa841aaec2ab25fdb517638051a4a11c0fd5" exitCode=0 Oct 06 07:20:46 crc kubenswrapper[4769]: I1006 07:20:46.678412 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdvtg" event={"ID":"7428fd04-528c-4403-8f36-227127f6ee19","Type":"ContainerDied","Data":"90dce64246e88e939872e356fcfbaa841aaec2ab25fdb517638051a4a11c0fd5"} Oct 06 07:20:46 crc kubenswrapper[4769]: I1006 07:20:46.679565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdvtg" event={"ID":"7428fd04-528c-4403-8f36-227127f6ee19","Type":"ContainerStarted","Data":"57f320e24d3f5af15d8209c02d815c642035729bc85d3fae3c2c1ae5811b6fc1"} 
Oct 06 07:20:46 crc kubenswrapper[4769]: I1006 07:20:46.688485 4769 generic.go:334] "Generic (PLEG): container finished" podID="a0efcfec-1046-4b55-8fed-8f271f6a9d99" containerID="9891c8ba8f4804813fc47054887c8a37b4cc355e5f2420979c2c71dd1cb01537" exitCode=0 Oct 06 07:20:46 crc kubenswrapper[4769]: I1006 07:20:46.688626 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr9gm" event={"ID":"a0efcfec-1046-4b55-8fed-8f271f6a9d99","Type":"ContainerDied","Data":"9891c8ba8f4804813fc47054887c8a37b4cc355e5f2420979c2c71dd1cb01537"} Oct 06 07:20:47 crc kubenswrapper[4769]: I1006 07:20:47.695396 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdvtg" event={"ID":"7428fd04-528c-4403-8f36-227127f6ee19","Type":"ContainerStarted","Data":"9c996ab17be42e68fd6b28852a262b7c68fe373af1a32826ef24bd0a5559f226"} Oct 06 07:20:47 crc kubenswrapper[4769]: I1006 07:20:47.698168 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vr9gm" event={"ID":"a0efcfec-1046-4b55-8fed-8f271f6a9d99","Type":"ContainerStarted","Data":"ba2e73e550e0baa2afc99b81bffd3439a2586daa3350da114aa16eb2c5358269"} Oct 06 07:20:47 crc kubenswrapper[4769]: I1006 07:20:47.739470 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vr9gm" podStartSLOduration=2.189101234 podStartE2EDuration="3.739401361s" podCreationTimestamp="2025-10-06 07:20:44 +0000 UTC" firstStartedPulling="2025-10-06 07:20:45.670286059 +0000 UTC m=+242.194567206" lastFinishedPulling="2025-10-06 07:20:47.220586186 +0000 UTC m=+243.744867333" observedRunningTime="2025-10-06 07:20:47.73649054 +0000 UTC m=+244.260771707" watchObservedRunningTime="2025-10-06 07:20:47.739401361 +0000 UTC m=+244.263682548" Oct 06 07:20:48 crc kubenswrapper[4769]: I1006 07:20:48.703184 4769 generic.go:334] "Generic (PLEG): container finished" 
podID="7428fd04-528c-4403-8f36-227127f6ee19" containerID="9c996ab17be42e68fd6b28852a262b7c68fe373af1a32826ef24bd0a5559f226" exitCode=0 Oct 06 07:20:48 crc kubenswrapper[4769]: I1006 07:20:48.703302 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdvtg" event={"ID":"7428fd04-528c-4403-8f36-227127f6ee19","Type":"ContainerDied","Data":"9c996ab17be42e68fd6b28852a262b7c68fe373af1a32826ef24bd0a5559f226"} Oct 06 07:20:49 crc kubenswrapper[4769]: I1006 07:20:49.710029 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mdvtg" event={"ID":"7428fd04-528c-4403-8f36-227127f6ee19","Type":"ContainerStarted","Data":"f90e72d2cf85021884f06b5b1a31bbcd1e09dcd8c223f2ff30ef4adcb44e8f53"} Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.699404 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.699565 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.743176 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.766319 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mdvtg" podStartSLOduration=6.317062452 podStartE2EDuration="8.766302003s" podCreationTimestamp="2025-10-06 07:20:44 +0000 UTC" firstStartedPulling="2025-10-06 07:20:46.680281622 +0000 UTC m=+243.204562769" lastFinishedPulling="2025-10-06 07:20:49.129521163 +0000 UTC m=+245.653802320" observedRunningTime="2025-10-06 07:20:49.733584809 +0000 UTC m=+246.257865956" watchObservedRunningTime="2025-10-06 07:20:52.766302003 +0000 UTC m=+249.290583170" Oct 06 07:20:52 crc 
kubenswrapper[4769]: I1006 07:20:52.794762 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-glqgj" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.932252 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.932632 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:52 crc kubenswrapper[4769]: I1006 07:20:52.968556 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:53 crc kubenswrapper[4769]: I1006 07:20:53.773397 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.108660 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.108729 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.148655 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.332150 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.332392 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.373507 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.795747 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mdvtg" Oct 06 07:20:55 crc kubenswrapper[4769]: I1006 07:20:55.803494 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vr9gm" Oct 06 07:22:22 crc kubenswrapper[4769]: I1006 07:22:22.245116 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:22:22 crc kubenswrapper[4769]: I1006 07:22:22.245749 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:22:52 crc kubenswrapper[4769]: I1006 07:22:52.245748 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:22:52 crc kubenswrapper[4769]: I1006 07:22:52.246217 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 
07:23:22.245805 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.246356 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.246404 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.247052 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.247107 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6" gracePeriod=600 Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.570198 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6" exitCode=0 Oct 06 
07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.570298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6"} Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.570855 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9"} Oct 06 07:23:22 crc kubenswrapper[4769]: I1006 07:23:22.570896 4769 scope.go:117] "RemoveContainer" containerID="19b35a8ce381345024662ab6d6f8a38279262c4ac8ecab5c35da191d1afe2205" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.672061 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jlgts"] Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.673943 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.690875 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jlgts"] Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858699 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-bound-sa-token\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858755 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl8c7\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-kube-api-access-xl8c7\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858790 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-certificates\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858830 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-trusted-ca\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858866 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-tls\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.858924 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/757aa636-239f-477c-a2e4-bc6821f1fbe2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.859029 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.859079 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/757aa636-239f-477c-a2e4-bc6821f1fbe2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.880732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960614 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-trusted-ca\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960729 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-tls\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/757aa636-239f-477c-a2e4-bc6821f1fbe2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960835 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/757aa636-239f-477c-a2e4-bc6821f1fbe2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc 
kubenswrapper[4769]: I1006 07:24:37.960896 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-bound-sa-token\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960942 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl8c7\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-kube-api-access-xl8c7\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.960989 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-certificates\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.961736 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-trusted-ca\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.962036 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/757aa636-239f-477c-a2e4-bc6821f1fbe2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.963056 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-certificates\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.967499 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/757aa636-239f-477c-a2e4-bc6821f1fbe2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.967987 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-registry-tls\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.983040 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-bound-sa-token\") pod \"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:37 crc kubenswrapper[4769]: I1006 07:24:37.985881 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl8c7\" (UniqueName: \"kubernetes.io/projected/757aa636-239f-477c-a2e4-bc6821f1fbe2-kube-api-access-xl8c7\") pod 
\"image-registry-66df7c8f76-jlgts\" (UID: \"757aa636-239f-477c-a2e4-bc6821f1fbe2\") " pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:38 crc kubenswrapper[4769]: I1006 07:24:38.001196 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:38 crc kubenswrapper[4769]: I1006 07:24:38.463790 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jlgts"] Oct 06 07:24:39 crc kubenswrapper[4769]: I1006 07:24:39.071402 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" event={"ID":"757aa636-239f-477c-a2e4-bc6821f1fbe2","Type":"ContainerStarted","Data":"acbd8b9e783ed6c395d6c44222646db6a1029c280da623c331fe7a6c2693a06a"} Oct 06 07:24:39 crc kubenswrapper[4769]: I1006 07:24:39.071795 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" event={"ID":"757aa636-239f-477c-a2e4-bc6821f1fbe2","Type":"ContainerStarted","Data":"022db9bfd69fb571a08feb444954ecffdf58c0e46b6f3ebaadd3b6a966c1c98e"} Oct 06 07:24:39 crc kubenswrapper[4769]: I1006 07:24:39.071817 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:39 crc kubenswrapper[4769]: I1006 07:24:39.099603 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" podStartSLOduration=2.099576157 podStartE2EDuration="2.099576157s" podCreationTimestamp="2025-10-06 07:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:24:39.094864757 +0000 UTC m=+475.619145974" watchObservedRunningTime="2025-10-06 07:24:39.099576157 +0000 UTC m=+475.623857314" Oct 06 07:24:58 crc 
kubenswrapper[4769]: I1006 07:24:58.007174 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jlgts" Oct 06 07:24:58 crc kubenswrapper[4769]: I1006 07:24:58.068275 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.203383 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kp88x"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.204734 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.206507 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-v4vlh" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.209751 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.209900 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.215996 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l2jg6"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.216626 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l2jg6" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.218340 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kvzx2" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.224872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kp88x"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.238277 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l2jg6"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.256220 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-k52gj"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.257152 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.259588 4769 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-672g8" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.261008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-k52gj"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.336378 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzkg\" (UniqueName: \"kubernetes.io/projected/ec86b22f-51e9-42f9-b4ce-357840aebe09-kube-api-access-fwzkg\") pod \"cert-manager-cainjector-7f985d654d-kp88x\" (UID: \"ec86b22f-51e9-42f9-b4ce-357840aebe09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.336771 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rtr\" (UniqueName: 
\"kubernetes.io/projected/259cbbd3-4950-4cbf-bcd0-ca39e1d77078-kube-api-access-65rtr\") pod \"cert-manager-5b446d88c5-l2jg6\" (UID: \"259cbbd3-4950-4cbf-bcd0-ca39e1d77078\") " pod="cert-manager/cert-manager-5b446d88c5-l2jg6" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.438743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rtr\" (UniqueName: \"kubernetes.io/projected/259cbbd3-4950-4cbf-bcd0-ca39e1d77078-kube-api-access-65rtr\") pod \"cert-manager-5b446d88c5-l2jg6\" (UID: \"259cbbd3-4950-4cbf-bcd0-ca39e1d77078\") " pod="cert-manager/cert-manager-5b446d88c5-l2jg6" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.439049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4r2c\" (UniqueName: \"kubernetes.io/projected/6ac3e82d-c1b5-4384-87f3-7859e7f0c02a-kube-api-access-b4r2c\") pod \"cert-manager-webhook-5655c58dd6-k52gj\" (UID: \"6ac3e82d-c1b5-4384-87f3-7859e7f0c02a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.439198 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwzkg\" (UniqueName: \"kubernetes.io/projected/ec86b22f-51e9-42f9-b4ce-357840aebe09-kube-api-access-fwzkg\") pod \"cert-manager-cainjector-7f985d654d-kp88x\" (UID: \"ec86b22f-51e9-42f9-b4ce-357840aebe09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.456487 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzkg\" (UniqueName: \"kubernetes.io/projected/ec86b22f-51e9-42f9-b4ce-357840aebe09-kube-api-access-fwzkg\") pod \"cert-manager-cainjector-7f985d654d-kp88x\" (UID: \"ec86b22f-51e9-42f9-b4ce-357840aebe09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.460646 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rtr\" (UniqueName: \"kubernetes.io/projected/259cbbd3-4950-4cbf-bcd0-ca39e1d77078-kube-api-access-65rtr\") pod \"cert-manager-5b446d88c5-l2jg6\" (UID: \"259cbbd3-4950-4cbf-bcd0-ca39e1d77078\") " pod="cert-manager/cert-manager-5b446d88c5-l2jg6" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.520823 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.534189 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-l2jg6" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.540192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4r2c\" (UniqueName: \"kubernetes.io/projected/6ac3e82d-c1b5-4384-87f3-7859e7f0c02a-kube-api-access-b4r2c\") pod \"cert-manager-webhook-5655c58dd6-k52gj\" (UID: \"6ac3e82d-c1b5-4384-87f3-7859e7f0c02a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.572533 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4r2c\" (UniqueName: \"kubernetes.io/projected/6ac3e82d-c1b5-4384-87f3-7859e7f0c02a-kube-api-access-b4r2c\") pod \"cert-manager-webhook-5655c58dd6-k52gj\" (UID: \"6ac3e82d-c1b5-4384-87f3-7859e7f0c02a\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.738820 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-kp88x"] Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.747513 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.786832 4769 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-l2jg6"] Oct 06 07:25:05 crc kubenswrapper[4769]: W1006 07:25:05.796065 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod259cbbd3_4950_4cbf_bcd0_ca39e1d77078.slice/crio-b9028fe037546ddec04f95c7942f86e310c6503b3296e612054568bf1e93c390 WatchSource:0}: Error finding container b9028fe037546ddec04f95c7942f86e310c6503b3296e612054568bf1e93c390: Status 404 returned error can't find the container with id b9028fe037546ddec04f95c7942f86e310c6503b3296e612054568bf1e93c390 Oct 06 07:25:05 crc kubenswrapper[4769]: I1006 07:25:05.868860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:06 crc kubenswrapper[4769]: I1006 07:25:06.101105 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-k52gj"] Oct 06 07:25:06 crc kubenswrapper[4769]: W1006 07:25:06.107716 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ac3e82d_c1b5_4384_87f3_7859e7f0c02a.slice/crio-a526b3678a05f352c2eb73922e002f9ede391348c8296bf7d7fbe3e403392674 WatchSource:0}: Error finding container a526b3678a05f352c2eb73922e002f9ede391348c8296bf7d7fbe3e403392674: Status 404 returned error can't find the container with id a526b3678a05f352c2eb73922e002f9ede391348c8296bf7d7fbe3e403392674 Oct 06 07:25:06 crc kubenswrapper[4769]: I1006 07:25:06.242952 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" event={"ID":"ec86b22f-51e9-42f9-b4ce-357840aebe09","Type":"ContainerStarted","Data":"7bce7fbb5c2f11c04bc8d0f62910d6e00dd40817db3a4c7e70ee46f2382fa84b"} Oct 06 07:25:06 crc kubenswrapper[4769]: I1006 07:25:06.244169 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-5b446d88c5-l2jg6" event={"ID":"259cbbd3-4950-4cbf-bcd0-ca39e1d77078","Type":"ContainerStarted","Data":"b9028fe037546ddec04f95c7942f86e310c6503b3296e612054568bf1e93c390"} Oct 06 07:25:06 crc kubenswrapper[4769]: I1006 07:25:06.245017 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" event={"ID":"6ac3e82d-c1b5-4384-87f3-7859e7f0c02a","Type":"ContainerStarted","Data":"a526b3678a05f352c2eb73922e002f9ede391348c8296bf7d7fbe3e403392674"} Oct 06 07:25:08 crc kubenswrapper[4769]: I1006 07:25:08.257921 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" event={"ID":"ec86b22f-51e9-42f9-b4ce-357840aebe09","Type":"ContainerStarted","Data":"3a90b462f249ac9f0bb5026bcf7e66104b19752b45e7ba43c73176a6fac4cb62"} Oct 06 07:25:08 crc kubenswrapper[4769]: I1006 07:25:08.275295 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-kp88x" podStartSLOduration=1.2911123660000001 podStartE2EDuration="3.275279654s" podCreationTimestamp="2025-10-06 07:25:05 +0000 UTC" firstStartedPulling="2025-10-06 07:25:05.747216258 +0000 UTC m=+502.271497405" lastFinishedPulling="2025-10-06 07:25:07.731383536 +0000 UTC m=+504.255664693" observedRunningTime="2025-10-06 07:25:08.27367161 +0000 UTC m=+504.797952767" watchObservedRunningTime="2025-10-06 07:25:08.275279654 +0000 UTC m=+504.799560811" Oct 06 07:25:09 crc kubenswrapper[4769]: I1006 07:25:09.263283 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" event={"ID":"6ac3e82d-c1b5-4384-87f3-7859e7f0c02a","Type":"ContainerStarted","Data":"1cbe92f7591c0fb3d2263411e1768253318848fc3b3aebff63bf908489114b93"} Oct 06 07:25:09 crc kubenswrapper[4769]: I1006 07:25:09.263363 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:09 crc kubenswrapper[4769]: I1006 07:25:09.264409 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-l2jg6" event={"ID":"259cbbd3-4950-4cbf-bcd0-ca39e1d77078","Type":"ContainerStarted","Data":"1240fad81e4519247e07f82f03dbaacf445d3a093d69b08892da8a20da6f9dea"} Oct 06 07:25:09 crc kubenswrapper[4769]: I1006 07:25:09.277098 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" podStartSLOduration=1.442613297 podStartE2EDuration="4.277084347s" podCreationTimestamp="2025-10-06 07:25:05 +0000 UTC" firstStartedPulling="2025-10-06 07:25:06.110226205 +0000 UTC m=+502.634507352" lastFinishedPulling="2025-10-06 07:25:08.944697245 +0000 UTC m=+505.468978402" observedRunningTime="2025-10-06 07:25:09.274952638 +0000 UTC m=+505.799233785" watchObservedRunningTime="2025-10-06 07:25:09.277084347 +0000 UTC m=+505.801365494" Oct 06 07:25:09 crc kubenswrapper[4769]: I1006 07:25:09.295835 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-l2jg6" podStartSLOduration=1.109885185 podStartE2EDuration="4.295818444s" podCreationTimestamp="2025-10-06 07:25:05 +0000 UTC" firstStartedPulling="2025-10-06 07:25:05.799036298 +0000 UTC m=+502.323317445" lastFinishedPulling="2025-10-06 07:25:08.984969557 +0000 UTC m=+505.509250704" observedRunningTime="2025-10-06 07:25:09.292141872 +0000 UTC m=+505.816423019" watchObservedRunningTime="2025-10-06 07:25:09.295818444 +0000 UTC m=+505.820099591" Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.780351 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bknc"] Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781213 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" 
podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-controller" containerID="cri-o://4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781762 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="sbdb" containerID="cri-o://d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781779 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781845 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-acl-logging" containerID="cri-o://7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781920 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="northd" containerID="cri-o://507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.781902 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="nbdb" containerID="cri-o://d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" gracePeriod=30 
Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.782062 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-node" containerID="cri-o://92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.844062 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" containerID="cri-o://4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" gracePeriod=30 Oct 06 07:25:15 crc kubenswrapper[4769]: I1006 07:25:15.872125 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-k52gj" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.126387 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/3.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.128596 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovn-acl-logging/0.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.129113 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovn-controller/0.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.129771 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.185758 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7fdks"] Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.185971 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.185991 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186005 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186014 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186027 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-node" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186035 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-node" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186050 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-acl-logging" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186057 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-acl-logging" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186069 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186077 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186091 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kubecfg-setup" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186099 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kubecfg-setup" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186112 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="nbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186119 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="nbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186132 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186140 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186152 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="northd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186160 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="northd" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186173 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="sbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186180 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="sbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186191 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186198 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186310 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-ovn-metrics" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186325 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186334 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186343 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186354 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="nbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186364 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="sbdb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186374 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186384 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="kube-rbac-proxy-node" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186397 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovn-acl-logging" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186409 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="northd" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186539 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186549 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.186561 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186568 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186742 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.186754 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="084bbba5-5940-4065-a799-2e6baff2338d" containerName="ovnkube-controller" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.189009 4769 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.212796 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.212861 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash" (OuterVolumeSpecName: "host-slash") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.212907 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.212958 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.212996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213039 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213084 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213120 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213148 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213176 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213207 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch\") pod 
\"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213233 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213282 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213451 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-systemd-units\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213627 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-etc-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213580 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213649 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213580 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log" (OuterVolumeSpecName: "node-log") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213615 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213615 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket" (OuterVolumeSpecName: "log-socket") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213619 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213637 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213641 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213742 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213808 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-ovn\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213839 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-env-overrides\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213875 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213927 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-slash\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213955 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-netns\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.213974 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-var-lib-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214007 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-7mrbk\" (UniqueName: \"kubernetes.io/projected/4124fae6-8611-4073-a1ee-f4b23f3b0208-kube-api-access-7mrbk\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214025 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-netd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214070 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-config\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214099 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214165 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-node-log\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214195 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-log-socket\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-bin\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214231 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-script-lib\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-kubelet\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214322 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-systemd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214377 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214495 4769 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214512 4769 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-log-socket\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214524 4769 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214534 4769 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214542 4769 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214551 4769 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-netns\") on node \"crc\" DevicePath \"\"" 
Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214561 4769 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214569 4769 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-slash\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214576 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.214584 4769 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-node-log\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.215215 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.220505 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.304715 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovnkube-controller/3.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.307353 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovn-acl-logging/0.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.307864 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8bknc_084bbba5-5940-4065-a799-2e6baff2338d/ovn-controller/0.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308276 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308352 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308448 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308507 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308559 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" 
containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308608 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" exitCode=0 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308665 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" exitCode=143 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308722 4769 generic.go:334] "Generic (PLEG): container finished" podID="084bbba5-5940-4065-a799-2e6baff2338d" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" exitCode=143 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308387 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308369 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308931 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308999 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.308951 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309113 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309244 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309266 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309279 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309287 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309294 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309302 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309309 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309315 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309322 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309329 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309341 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309353 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309361 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309368 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309374 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309381 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309387 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309393 4769 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309400 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309407 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309411 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309434 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309443 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309449 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309454 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 
07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309459 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309464 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309470 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309474 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309480 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309484 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309491 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8bknc" 
event={"ID":"084bbba5-5940-4065-a799-2e6baff2338d","Type":"ContainerDied","Data":"6252a133a51df23fc250984851eb438a448f6822704a58a273d089de0e30221c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309504 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309512 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309517 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309522 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309527 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309533 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309539 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309547 4769 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309554 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.309561 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.311172 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/2.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.311762 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/1.log" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.311800 4769 generic.go:334] "Generic (PLEG): container finished" podID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" containerID="7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3" exitCode=2 Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.311821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerDied","Data":"7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.311834 4769 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b"} Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.312173 4769 scope.go:117] "RemoveContainer" 
containerID="7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.312415 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cjjvp_openshift-multus(3b98abd5-990e-494c-a2a5-526fae1bd5ec)\"" pod="openshift-multus/multus-cjjvp" podUID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316500 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sskw8\" (UniqueName: \"kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316710 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316778 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316843 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316908 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316974 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317052 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin\") pod \"084bbba5-5940-4065-a799-2e6baff2338d\" (UID: \"084bbba5-5940-4065-a799-2e6baff2338d\") " Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.316923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317159 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317177 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317190 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317331 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317548 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrbk\" (UniqueName: \"kubernetes.io/projected/4124fae6-8611-4073-a1ee-f4b23f3b0208-kube-api-access-7mrbk\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317620 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-netd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-config\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317720 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-netd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.318635 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.317780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-config\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320431 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-node-log\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320539 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-log-socket\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 
07:25:16.320668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-bin\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320709 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-script-lib\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320775 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-systemd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320828 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-kubelet\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.320949 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-systemd-units\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321565 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8" (OuterVolumeSpecName: "kube-api-access-sskw8") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "kube-api-access-sskw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321640 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-etc-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321817 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-node-log\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321869 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321981 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-log-socket\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.322080 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-ovn\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.322238 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-env-overrides\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.322877 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323047 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-slash\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-netns\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-var-lib-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-systemd-units\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-systemd\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331673 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-netns\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323230 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-kubelet\") pod 
\"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323556 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-slash\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323730 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-env-overrides\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.324385 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-run-ovn-kubernetes\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.327099 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovn-node-metrics-cert\") pod \"ovnkube-node-7fdks\" (UID: 
\"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331068 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4124fae6-8611-4073-a1ee-f4b23f3b0208-ovnkube-script-lib\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331564 4769 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331865 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331882 4769 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323244 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-host-cni-bin\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.323306 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-run-ovn\") pod \"ovnkube-node-7fdks\" (UID: 
\"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331906 4769 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.332001 4769 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.332030 4769 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.332088 4769 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/084bbba5-5940-4065-a799-2e6baff2338d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.332102 4769 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/084bbba5-5940-4065-a799-2e6baff2338d-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.332111 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sskw8\" (UniqueName: \"kubernetes.io/projected/084bbba5-5940-4065-a799-2e6baff2338d-kube-api-access-sskw8\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.331627 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-var-lib-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: 
\"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.321678 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4124fae6-8611-4073-a1ee-f4b23f3b0208-etc-openvswitch\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.333362 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.341300 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrbk\" (UniqueName: \"kubernetes.io/projected/4124fae6-8611-4073-a1ee-f4b23f3b0208-kube-api-access-7mrbk\") pod \"ovnkube-node-7fdks\" (UID: \"4124fae6-8611-4073-a1ee-f4b23f3b0208\") " pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.343079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "084bbba5-5940-4065-a799-2e6baff2338d" (UID: "084bbba5-5940-4065-a799-2e6baff2338d"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.353694 4769 scope.go:117] "RemoveContainer" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.369507 4769 scope.go:117] "RemoveContainer" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.385810 4769 scope.go:117] "RemoveContainer" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.401682 4769 scope.go:117] "RemoveContainer" containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.418322 4769 scope.go:117] "RemoveContainer" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.430591 4769 scope.go:117] "RemoveContainer" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.433248 4769 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/084bbba5-5940-4065-a799-2e6baff2338d-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.444660 4769 scope.go:117] "RemoveContainer" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.460643 4769 scope.go:117] "RemoveContainer" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.475322 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.475744 4769 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.475778 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} err="failed to get container status \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.475803 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.476057 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": container with ID starting with f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629 not found: ID does not exist" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476078 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} err="failed to get container status \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": rpc error: code = NotFound 
desc = could not find container \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": container with ID starting with f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476089 4769 scope.go:117] "RemoveContainer" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.476365 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": container with ID starting with d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb not found: ID does not exist" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476385 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} err="failed to get container status \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": rpc error: code = NotFound desc = could not find container \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": container with ID starting with d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476397 4769 scope.go:117] "RemoveContainer" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.476601 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": container with ID starting with 
d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016 not found: ID does not exist" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476622 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} err="failed to get container status \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": rpc error: code = NotFound desc = could not find container \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": container with ID starting with d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476635 4769 scope.go:117] "RemoveContainer" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.476813 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": container with ID starting with 507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb not found: ID does not exist" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476829 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} err="failed to get container status \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": rpc error: code = NotFound desc = could not find container \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": container with ID starting with 507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb not found: ID does not 
exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.476840 4769 scope.go:117] "RemoveContainer" containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.477192 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": container with ID starting with c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c not found: ID does not exist" containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477220 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} err="failed to get container status \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": rpc error: code = NotFound desc = could not find container \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": container with ID starting with c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477232 4769 scope.go:117] "RemoveContainer" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.477505 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": container with ID starting with 92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61 not found: ID does not exist" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477526 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} err="failed to get container status \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": rpc error: code = NotFound desc = could not find container \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": container with ID starting with 92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477543 4769 scope.go:117] "RemoveContainer" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.477807 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": container with ID starting with 7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4 not found: ID does not exist" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477862 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} err="failed to get container status \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": rpc error: code = NotFound desc = could not find container \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": container with ID starting with 7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.477904 4769 scope.go:117] "RemoveContainer" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.478174 4769 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": container with ID starting with 4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd not found: ID does not exist" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478199 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} err="failed to get container status \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": rpc error: code = NotFound desc = could not find container \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": container with ID starting with 4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478215 4769 scope.go:117] "RemoveContainer" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: E1006 07:25:16.478466 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": container with ID starting with 9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537 not found: ID does not exist" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478505 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} err="failed to get container status \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": rpc error: code = NotFound desc = could 
not find container \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": container with ID starting with 9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478524 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478856 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} err="failed to get container status \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.478927 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.479185 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} err="failed to get container status \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": rpc error: code = NotFound desc = could not find container \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": container with ID starting with f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.479211 4769 scope.go:117] "RemoveContainer" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 
07:25:16.479550 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} err="failed to get container status \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": rpc error: code = NotFound desc = could not find container \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": container with ID starting with d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.479591 4769 scope.go:117] "RemoveContainer" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.479818 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} err="failed to get container status \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": rpc error: code = NotFound desc = could not find container \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": container with ID starting with d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.479839 4769 scope.go:117] "RemoveContainer" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480069 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} err="failed to get container status \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": rpc error: code = NotFound desc = could not find container \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": container with ID starting with 
507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480100 4769 scope.go:117] "RemoveContainer" containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480333 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} err="failed to get container status \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": rpc error: code = NotFound desc = could not find container \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": container with ID starting with c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480358 4769 scope.go:117] "RemoveContainer" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480578 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} err="failed to get container status \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": rpc error: code = NotFound desc = could not find container \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": container with ID starting with 92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480603 4769 scope.go:117] "RemoveContainer" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480799 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} err="failed to get container status \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": rpc error: code = NotFound desc = could not find container \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": container with ID starting with 7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.480819 4769 scope.go:117] "RemoveContainer" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481080 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} err="failed to get container status \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": rpc error: code = NotFound desc = could not find container \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": container with ID starting with 4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481125 4769 scope.go:117] "RemoveContainer" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481338 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} err="failed to get container status \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": rpc error: code = NotFound desc = could not find container \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": container with ID starting with 9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537 not found: ID does not 
exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481360 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481541 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} err="failed to get container status \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481561 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481707 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} err="failed to get container status \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": rpc error: code = NotFound desc = could not find container \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": container with ID starting with f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481751 4769 scope.go:117] "RemoveContainer" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481922 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} err="failed to get container status 
\"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": rpc error: code = NotFound desc = could not find container \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": container with ID starting with d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.481944 4769 scope.go:117] "RemoveContainer" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482087 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} err="failed to get container status \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": rpc error: code = NotFound desc = could not find container \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": container with ID starting with d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482107 4769 scope.go:117] "RemoveContainer" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482258 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} err="failed to get container status \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": rpc error: code = NotFound desc = could not find container \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": container with ID starting with 507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482274 4769 scope.go:117] "RemoveContainer" 
containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482438 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} err="failed to get container status \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": rpc error: code = NotFound desc = could not find container \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": container with ID starting with c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482458 4769 scope.go:117] "RemoveContainer" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482616 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} err="failed to get container status \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": rpc error: code = NotFound desc = could not find container \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": container with ID starting with 92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482632 4769 scope.go:117] "RemoveContainer" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482772 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} err="failed to get container status \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": rpc error: code = NotFound desc = could 
not find container \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": container with ID starting with 7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482788 4769 scope.go:117] "RemoveContainer" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482937 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} err="failed to get container status \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": rpc error: code = NotFound desc = could not find container \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": container with ID starting with 4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.482955 4769 scope.go:117] "RemoveContainer" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483093 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} err="failed to get container status \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": rpc error: code = NotFound desc = could not find container \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": container with ID starting with 9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483108 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 
07:25:16.483258 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} err="failed to get container status \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483277 4769 scope.go:117] "RemoveContainer" containerID="f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483441 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629"} err="failed to get container status \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": rpc error: code = NotFound desc = could not find container \"f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629\": container with ID starting with f1a0592f01fd07db5aa2a039e8fc74fe0eff60ff103825b5db694b3d90412629 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483460 4769 scope.go:117] "RemoveContainer" containerID="d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483609 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb"} err="failed to get container status \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": rpc error: code = NotFound desc = could not find container \"d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb\": container with ID starting with 
d365614b59de1a3bee06ecfd9876e0c1600fd402db1418dd72764d906fc4cffb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483625 4769 scope.go:117] "RemoveContainer" containerID="d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483766 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016"} err="failed to get container status \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": rpc error: code = NotFound desc = could not find container \"d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016\": container with ID starting with d5a881aec42e67ecadb8e681b634b5b65b479f5161bffb8f6b021182ea4e3016 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483784 4769 scope.go:117] "RemoveContainer" containerID="507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483933 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb"} err="failed to get container status \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": rpc error: code = NotFound desc = could not find container \"507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb\": container with ID starting with 507fa9f7a423d2f20d6f6070abee7405649bc06eb6f83c912143bd13727fb7cb not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.483950 4769 scope.go:117] "RemoveContainer" containerID="c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484090 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c"} err="failed to get container status \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": rpc error: code = NotFound desc = could not find container \"c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c\": container with ID starting with c1d46d8705e53841f17e8ee896751c5f42e14e3143a557073accecc4f87a830c not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484109 4769 scope.go:117] "RemoveContainer" containerID="92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484289 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61"} err="failed to get container status \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": rpc error: code = NotFound desc = could not find container \"92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61\": container with ID starting with 92b9d36a2e121e4ff38cbc6925c143401c696d1d24094060459034754fda7d61 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484305 4769 scope.go:117] "RemoveContainer" containerID="7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484463 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4"} err="failed to get container status \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": rpc error: code = NotFound desc = could not find container \"7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4\": container with ID starting with 7b1e5e51485e306b166959d564cc4e9a936867b9920735a51c1662b4bd8372e4 not found: ID does not 
exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484480 4769 scope.go:117] "RemoveContainer" containerID="4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484626 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd"} err="failed to get container status \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": rpc error: code = NotFound desc = could not find container \"4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd\": container with ID starting with 4a3b2797814e3fd000c82d3f5c79040020f0b5146189a8c300f78e18d29bdbfd not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484642 4769 scope.go:117] "RemoveContainer" containerID="9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484790 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537"} err="failed to get container status \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": rpc error: code = NotFound desc = could not find container \"9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537\": container with ID starting with 9b7d28a1a2ec34f5dd47da0eaac0a3b55ea2fa631d7ae99c0ea2798d3e7df537 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484807 4769 scope.go:117] "RemoveContainer" containerID="4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.484951 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65"} err="failed to get container status 
\"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": rpc error: code = NotFound desc = could not find container \"4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65\": container with ID starting with 4167dbf1c868d99f4d3645fe32ba234290f566bbbb35a6493b03fa7543a39d65 not found: ID does not exist" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.502883 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.676533 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bknc"] Oct 06 07:25:16 crc kubenswrapper[4769]: I1006 07:25:16.680282 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8bknc"] Oct 06 07:25:17 crc kubenswrapper[4769]: I1006 07:25:17.321739 4769 generic.go:334] "Generic (PLEG): container finished" podID="4124fae6-8611-4073-a1ee-f4b23f3b0208" containerID="5dbf5e951fc5b5277834a529f0095c22988ba2b92d93839fd949acb3c2fcbf91" exitCode=0 Oct 06 07:25:17 crc kubenswrapper[4769]: I1006 07:25:17.321803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerDied","Data":"5dbf5e951fc5b5277834a529f0095c22988ba2b92d93839fd949acb3c2fcbf91"} Oct 06 07:25:17 crc kubenswrapper[4769]: I1006 07:25:17.321844 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"c757f686e8c300d62af0ca3a5c11ffa6b816c2d04e04d4aeb21822a3c0ba4da2"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.183339 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="084bbba5-5940-4065-a799-2e6baff2338d" path="/var/lib/kubelet/pods/084bbba5-5940-4065-a799-2e6baff2338d/volumes" Oct 
06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"bf981e3942af39aad3dead25a97a42a658569b069826d5fda77b91107ddc8a3b"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"9176cc79650c627dc23fdec56b614c9a016af19bbad97f0585217b851760a99c"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"bbc2a1b9ba420ed2f6b3ae5539cab2de07677b7a4306adde1b9bdef0ce54fce9"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333574 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"a26ebeb171ebda8b45cdb80593cd80bb7e5f0572e73fb33c12b5e11cfb7f8c6c"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333582 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"cb8c8fb8be006d950f634bf23eae49caabb122e7d4368973a2fd5ed1831b57b8"} Oct 06 07:25:18 crc kubenswrapper[4769]: I1006 07:25:18.333590 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"c324a4abb0b3a991cda9a9407f8bafae4c6d3bda585340202e0ddcd00ab0940b"} Oct 06 07:25:20 crc kubenswrapper[4769]: I1006 07:25:20.347749 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"54db53f920630e6c75df3e42fe2cd1eb39477f8733cdb3fccc3f400adf9a3e87"} Oct 06 07:25:22 crc kubenswrapper[4769]: I1006 07:25:22.245951 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:25:22 crc kubenswrapper[4769]: I1006 07:25:22.246618 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.124282 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" podUID="4af768db-1836-4f0b-a47f-1b5b609c5703" containerName="registry" containerID="cri-o://94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4" gracePeriod=30 Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.326972 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.367126 4769 generic.go:334] "Generic (PLEG): container finished" podID="4af768db-1836-4f0b-a47f-1b5b609c5703" containerID="94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4" exitCode=0 Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.367200 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" event={"ID":"4af768db-1836-4f0b-a47f-1b5b609c5703","Type":"ContainerDied","Data":"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4"} Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.367233 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" event={"ID":"4af768db-1836-4f0b-a47f-1b5b609c5703","Type":"ContainerDied","Data":"ec682835aeaf4cc007230181d482b1dd74ffc9e12e637c324d9680f802adbc5d"} Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.367252 4769 scope.go:117] "RemoveContainer" containerID="94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.367346 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-h7lhw" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.375896 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" event={"ID":"4124fae6-8611-4073-a1ee-f4b23f3b0208","Type":"ContainerStarted","Data":"945947b2d3219ab0322a97ebe4199c583fddba69976b5c5d7f16e4ee4872bff3"} Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.376397 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.376435 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.388095 4769 scope.go:117] "RemoveContainer" containerID="94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4" Oct 06 07:25:23 crc kubenswrapper[4769]: E1006 07:25:23.388999 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4\": container with ID starting with 94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4 not found: ID does not exist" containerID="94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.389053 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4"} err="failed to get container status \"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4\": rpc error: code = NotFound desc = could not find container \"94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4\": container with ID starting with 94cc8f4c48f7c9970b877b43138703ff9e187f93ea296060422d3f72dda730a4 not found: ID does 
not exist" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.412283 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.413611 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" podStartSLOduration=7.413601387 podStartE2EDuration="7.413601387s" podCreationTimestamp="2025-10-06 07:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:25:23.406008969 +0000 UTC m=+519.930290126" watchObservedRunningTime="2025-10-06 07:25:23.413601387 +0000 UTC m=+519.937882534" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.426911 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427025 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9l6w\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427121 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427161 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427229 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427330 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.427586 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4af768db-1836-4f0b-a47f-1b5b609c5703\" (UID: \"4af768db-1836-4f0b-a47f-1b5b609c5703\") " Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.429850 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: 
"4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.440511 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w" (OuterVolumeSpecName: "kube-api-access-v9l6w") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "kube-api-access-v9l6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.440506 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.448633 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.449299 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.449727 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.458341 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.461391 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4af768db-1836-4f0b-a47f-1b5b609c5703" (UID: "4af768db-1836-4f0b-a47f-1b5b609c5703"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528628 4769 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4af768db-1836-4f0b-a47f-1b5b609c5703-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528665 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9l6w\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-kube-api-access-v9l6w\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528674 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528686 4769 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4af768db-1836-4f0b-a47f-1b5b609c5703-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528694 4769 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528702 4769 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.528711 4769 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4af768db-1836-4f0b-a47f-1b5b609c5703-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:23 crc 
kubenswrapper[4769]: I1006 07:25:23.704443 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"] Oct 06 07:25:23 crc kubenswrapper[4769]: I1006 07:25:23.710730 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-h7lhw"] Oct 06 07:25:24 crc kubenswrapper[4769]: I1006 07:25:24.175852 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af768db-1836-4f0b-a47f-1b5b609c5703" path="/var/lib/kubelet/pods/4af768db-1836-4f0b-a47f-1b5b609c5703/volumes" Oct 06 07:25:24 crc kubenswrapper[4769]: I1006 07:25:24.385956 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:24 crc kubenswrapper[4769]: I1006 07:25:24.429670 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:27 crc kubenswrapper[4769]: I1006 07:25:27.165605 4769 scope.go:117] "RemoveContainer" containerID="7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3" Oct 06 07:25:27 crc kubenswrapper[4769]: E1006 07:25:27.166118 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-cjjvp_openshift-multus(3b98abd5-990e-494c-a2a5-526fae1bd5ec)\"" pod="openshift-multus/multus-cjjvp" podUID="3b98abd5-990e-494c-a2a5-526fae1bd5ec" Oct 06 07:25:42 crc kubenswrapper[4769]: I1006 07:25:42.166511 4769 scope.go:117] "RemoveContainer" containerID="7040c00045ef3e10db0e433c802a4cfadffb3fb10f446ba9b7843be295d98ac3" Oct 06 07:25:42 crc kubenswrapper[4769]: I1006 07:25:42.495237 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/2.log" Oct 06 07:25:42 crc kubenswrapper[4769]: I1006 
07:25:42.496018 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/1.log" Oct 06 07:25:42 crc kubenswrapper[4769]: I1006 07:25:42.496075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cjjvp" event={"ID":"3b98abd5-990e-494c-a2a5-526fae1bd5ec","Type":"ContainerStarted","Data":"bec4c185589563e53f280dd16b47b9a8a521fbd432d08af034d26ab283d9f48d"} Oct 06 07:25:44 crc kubenswrapper[4769]: I1006 07:25:44.340726 4769 scope.go:117] "RemoveContainer" containerID="44ded000e408a808794358baf6c59192039ab2b39297b20a4ab81d9115163b08" Oct 06 07:25:44 crc kubenswrapper[4769]: I1006 07:25:44.353601 4769 scope.go:117] "RemoveContainer" containerID="1e70a050c86df40da0a04c1eeab4210f4f4112062f0877bd98c87639c6fbe1af" Oct 06 07:25:44 crc kubenswrapper[4769]: I1006 07:25:44.365558 4769 scope.go:117] "RemoveContainer" containerID="94c9222627a4b973238ae17009e90dcd83f32bcf863db3aad235d41287b70f6b" Oct 06 07:25:44 crc kubenswrapper[4769]: I1006 07:25:44.510363 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cjjvp_3b98abd5-990e-494c-a2a5-526fae1bd5ec/kube-multus/2.log" Oct 06 07:25:46 crc kubenswrapper[4769]: I1006 07:25:46.540432 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7fdks" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.246063 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.247356 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.983779 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c"] Oct 06 07:25:52 crc kubenswrapper[4769]: E1006 07:25:52.984247 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af768db-1836-4f0b-a47f-1b5b609c5703" containerName="registry" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.984258 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af768db-1836-4f0b-a47f-1b5b609c5703" containerName="registry" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.984357 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af768db-1836-4f0b-a47f-1b5b609c5703" containerName="registry" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.985038 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:52 crc kubenswrapper[4769]: I1006 07:25:52.988555 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.005415 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c"] Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.054136 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.054195 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.054275 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6xh\" (UniqueName: \"kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: 
I1006 07:25:53.156126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.156194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.156238 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np6xh\" (UniqueName: \"kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.156692 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.156938 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.185168 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6xh\" (UniqueName: \"kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.353388 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:53 crc kubenswrapper[4769]: I1006 07:25:53.556853 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c"] Oct 06 07:25:53 crc kubenswrapper[4769]: W1006 07:25:53.567883 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4396dd0e_d481_430a_a35a_73278b5e925f.slice/crio-4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb WatchSource:0}: Error finding container 4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb: Status 404 returned error can't find the container with id 4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb Oct 06 07:25:54 crc kubenswrapper[4769]: I1006 07:25:54.574926 4769 generic.go:334] "Generic (PLEG): container finished" podID="4396dd0e-d481-430a-a35a-73278b5e925f" containerID="0b67f802e1ddd315bc0ae225a2efa09363a99c445ec039b913a2c114752a1f7e" 
exitCode=0 Oct 06 07:25:54 crc kubenswrapper[4769]: I1006 07:25:54.575011 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" event={"ID":"4396dd0e-d481-430a-a35a-73278b5e925f","Type":"ContainerDied","Data":"0b67f802e1ddd315bc0ae225a2efa09363a99c445ec039b913a2c114752a1f7e"} Oct 06 07:25:54 crc kubenswrapper[4769]: I1006 07:25:54.575364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" event={"ID":"4396dd0e-d481-430a-a35a-73278b5e925f","Type":"ContainerStarted","Data":"4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb"} Oct 06 07:25:56 crc kubenswrapper[4769]: I1006 07:25:56.590191 4769 generic.go:334] "Generic (PLEG): container finished" podID="4396dd0e-d481-430a-a35a-73278b5e925f" containerID="2a53c9b0a7e4e99328e8a870aea70d692e48d49576841d2960fb816f88afacea" exitCode=0 Oct 06 07:25:56 crc kubenswrapper[4769]: I1006 07:25:56.590297 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" event={"ID":"4396dd0e-d481-430a-a35a-73278b5e925f","Type":"ContainerDied","Data":"2a53c9b0a7e4e99328e8a870aea70d692e48d49576841d2960fb816f88afacea"} Oct 06 07:25:57 crc kubenswrapper[4769]: I1006 07:25:57.602228 4769 generic.go:334] "Generic (PLEG): container finished" podID="4396dd0e-d481-430a-a35a-73278b5e925f" containerID="05c8b7a5d2d67f5eec39bc53d9040bd8d5dce7c127a60b4d0f0e01a078b563ae" exitCode=0 Oct 06 07:25:57 crc kubenswrapper[4769]: I1006 07:25:57.602301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" event={"ID":"4396dd0e-d481-430a-a35a-73278b5e925f","Type":"ContainerDied","Data":"05c8b7a5d2d67f5eec39bc53d9040bd8d5dce7c127a60b4d0f0e01a078b563ae"} Oct 06 07:25:58 crc 
kubenswrapper[4769]: I1006 07:25:58.903029 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.043232 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util\") pod \"4396dd0e-d481-430a-a35a-73278b5e925f\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.043376 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle\") pod \"4396dd0e-d481-430a-a35a-73278b5e925f\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.044243 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle" (OuterVolumeSpecName: "bundle") pod "4396dd0e-d481-430a-a35a-73278b5e925f" (UID: "4396dd0e-d481-430a-a35a-73278b5e925f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.044324 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np6xh\" (UniqueName: \"kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh\") pod \"4396dd0e-d481-430a-a35a-73278b5e925f\" (UID: \"4396dd0e-d481-430a-a35a-73278b5e925f\") " Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.045738 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.053502 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh" (OuterVolumeSpecName: "kube-api-access-np6xh") pod "4396dd0e-d481-430a-a35a-73278b5e925f" (UID: "4396dd0e-d481-430a-a35a-73278b5e925f"). InnerVolumeSpecName "kube-api-access-np6xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.057416 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util" (OuterVolumeSpecName: "util") pod "4396dd0e-d481-430a-a35a-73278b5e925f" (UID: "4396dd0e-d481-430a-a35a-73278b5e925f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.147141 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4396dd0e-d481-430a-a35a-73278b5e925f-util\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.147204 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np6xh\" (UniqueName: \"kubernetes.io/projected/4396dd0e-d481-430a-a35a-73278b5e925f-kube-api-access-np6xh\") on node \"crc\" DevicePath \"\"" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.624121 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" event={"ID":"4396dd0e-d481-430a-a35a-73278b5e925f","Type":"ContainerDied","Data":"4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb"} Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.624496 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8e338f1f705a6be0b81b0ad6a69c4546e2ac76f18f2631c61fe781cd4a89bb" Oct 06 07:25:59 crc kubenswrapper[4769]: I1006 07:25:59.624314 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.511323 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9"] Oct 06 07:26:00 crc kubenswrapper[4769]: E1006 07:26:00.511543 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="extract" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.511553 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="extract" Oct 06 07:26:00 crc kubenswrapper[4769]: E1006 07:26:00.511566 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="util" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.511571 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="util" Oct 06 07:26:00 crc kubenswrapper[4769]: E1006 07:26:00.511581 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="pull" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.511587 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="pull" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.511682 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4396dd0e-d481-430a-a35a-73278b5e925f" containerName="extract" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.512053 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.513387 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sfvh4" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.514301 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.514528 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.519357 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9"] Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.670108 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hbp\" (UniqueName: \"kubernetes.io/projected/a25249c6-ee28-4d96-a2c7-077e0a0bb198-kube-api-access-b9hbp\") pod \"nmstate-operator-858ddd8f98-zdtf9\" (UID: \"a25249c6-ee28-4d96-a2c7-077e0a0bb198\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.771064 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9hbp\" (UniqueName: \"kubernetes.io/projected/a25249c6-ee28-4d96-a2c7-077e0a0bb198-kube-api-access-b9hbp\") pod \"nmstate-operator-858ddd8f98-zdtf9\" (UID: \"a25249c6-ee28-4d96-a2c7-077e0a0bb198\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" Oct 06 07:26:00 crc kubenswrapper[4769]: I1006 07:26:00.828233 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9hbp\" (UniqueName: \"kubernetes.io/projected/a25249c6-ee28-4d96-a2c7-077e0a0bb198-kube-api-access-b9hbp\") pod \"nmstate-operator-858ddd8f98-zdtf9\" (UID: 
\"a25249c6-ee28-4d96-a2c7-077e0a0bb198\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" Oct 06 07:26:01 crc kubenswrapper[4769]: I1006 07:26:01.127555 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" Oct 06 07:26:01 crc kubenswrapper[4769]: I1006 07:26:01.350638 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9"] Oct 06 07:26:01 crc kubenswrapper[4769]: W1006 07:26:01.357384 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda25249c6_ee28_4d96_a2c7_077e0a0bb198.slice/crio-c669016d891311593e1b68e6682618ff0ad5409dbc2857e240e2369cde082a79 WatchSource:0}: Error finding container c669016d891311593e1b68e6682618ff0ad5409dbc2857e240e2369cde082a79: Status 404 returned error can't find the container with id c669016d891311593e1b68e6682618ff0ad5409dbc2857e240e2369cde082a79 Oct 06 07:26:01 crc kubenswrapper[4769]: I1006 07:26:01.633960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" event={"ID":"a25249c6-ee28-4d96-a2c7-077e0a0bb198","Type":"ContainerStarted","Data":"c669016d891311593e1b68e6682618ff0ad5409dbc2857e240e2369cde082a79"} Oct 06 07:26:03 crc kubenswrapper[4769]: I1006 07:26:03.650344 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" event={"ID":"a25249c6-ee28-4d96-a2c7-077e0a0bb198","Type":"ContainerStarted","Data":"fbbaa9cb11353a13cfbdee2cef833c72b53ff05075d3c9c2a5082047e14a6656"} Oct 06 07:26:03 crc kubenswrapper[4769]: I1006 07:26:03.664523 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-zdtf9" podStartSLOduration=1.8189625999999999 podStartE2EDuration="3.664505981s" podCreationTimestamp="2025-10-06 07:26:00 +0000 UTC" 
firstStartedPulling="2025-10-06 07:26:01.359489888 +0000 UTC m=+557.883771035" lastFinishedPulling="2025-10-06 07:26:03.205033269 +0000 UTC m=+559.729314416" observedRunningTime="2025-10-06 07:26:03.664056048 +0000 UTC m=+560.188337235" watchObservedRunningTime="2025-10-06 07:26:03.664505981 +0000 UTC m=+560.188787148" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.781196 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.782007 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.783374 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tkqnj" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.784906 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.785708 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.790136 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.790375 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.804686 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-ddkqb"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.805403 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.822527 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.907305 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.907914 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.912793 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.912927 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ljgmw" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.913008 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.918963 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7"] Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd69fd45-0ee0-4358-8f30-39e251e3a27f-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924332 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74jjt\" (UniqueName: 
\"kubernetes.io/projected/6ab7169e-e46e-4e3b-a21a-7bd332467bb6-kube-api-access-74jjt\") pod \"nmstate-metrics-fdff9cb8d-r8v6q\" (UID: \"6ab7169e-e46e-4e3b-a21a-7bd332467bb6\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-dbus-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-nmstate-lock\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924450 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8252p\" (UniqueName: \"kubernetes.io/projected/dd69fd45-0ee0-4358-8f30-39e251e3a27f-kube-api-access-8252p\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924479 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-ovs-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:04 crc kubenswrapper[4769]: I1006 07:26:04.924496 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snbdx\" (UniqueName: \"kubernetes.io/projected/358b3c8a-e981-4525-af7a-bbf05421b9fa-kube-api-access-snbdx\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025328 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd69fd45-0ee0-4358-8f30-39e251e3a27f-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025374 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74jjt\" (UniqueName: \"kubernetes.io/projected/6ab7169e-e46e-4e3b-a21a-7bd332467bb6-kube-api-access-74jjt\") pod \"nmstate-metrics-fdff9cb8d-r8v6q\" (UID: \"6ab7169e-e46e-4e3b-a21a-7bd332467bb6\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025398 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-dbus-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025463 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-nmstate-lock\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025491 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8252p\" (UniqueName: \"kubernetes.io/projected/dd69fd45-0ee0-4358-8f30-39e251e3a27f-kube-api-access-8252p\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025551 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/802ef315-9d05-4501-ac19-e994a822fec7-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025567 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjxb\" (UniqueName: \"kubernetes.io/projected/802ef315-9d05-4501-ac19-e994a822fec7-kube-api-access-bsjxb\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-ovs-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " 
pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025613 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snbdx\" (UniqueName: \"kubernetes.io/projected/358b3c8a-e981-4525-af7a-bbf05421b9fa-kube-api-access-snbdx\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-nmstate-lock\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-ovs-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.025919 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/358b3c8a-e981-4525-af7a-bbf05421b9fa-dbus-socket\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.042363 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd69fd45-0ee0-4358-8f30-39e251e3a27f-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 
07:26:05.046260 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snbdx\" (UniqueName: \"kubernetes.io/projected/358b3c8a-e981-4525-af7a-bbf05421b9fa-kube-api-access-snbdx\") pod \"nmstate-handler-ddkqb\" (UID: \"358b3c8a-e981-4525-af7a-bbf05421b9fa\") " pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.046273 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8252p\" (UniqueName: \"kubernetes.io/projected/dd69fd45-0ee0-4358-8f30-39e251e3a27f-kube-api-access-8252p\") pod \"nmstate-webhook-6cdbc54649-brg5f\" (UID: \"dd69fd45-0ee0-4358-8f30-39e251e3a27f\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.052030 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74jjt\" (UniqueName: \"kubernetes.io/projected/6ab7169e-e46e-4e3b-a21a-7bd332467bb6-kube-api-access-74jjt\") pod \"nmstate-metrics-fdff9cb8d-r8v6q\" (UID: \"6ab7169e-e46e-4e3b-a21a-7bd332467bb6\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.104359 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.116344 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.117938 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-8d748fb98-7bd6r"] Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.118597 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.127048 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.127104 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/802ef315-9d05-4501-ac19-e994a822fec7-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.127122 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjxb\" (UniqueName: \"kubernetes.io/projected/802ef315-9d05-4501-ac19-e994a822fec7-kube-api-access-bsjxb\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: E1006 07:26:05.129609 4769 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Oct 06 07:26:05 crc kubenswrapper[4769]: E1006 07:26:05.129692 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert podName:802ef315-9d05-4501-ac19-e994a822fec7 nodeName:}" failed. No retries permitted until 2025-10-06 07:26:05.629674421 +0000 UTC m=+562.153955568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert") pod "nmstate-console-plugin-6b874cbd85-p7rg7" (UID: "802ef315-9d05-4501-ac19-e994a822fec7") : secret "plugin-serving-cert" not found Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.130512 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8d748fb98-7bd6r"] Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.131190 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/802ef315-9d05-4501-ac19-e994a822fec7-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.137586 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.168087 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjxb\" (UniqueName: \"kubernetes.io/projected/802ef315-9d05-4501-ac19-e994a822fec7-kube-api-access-bsjxb\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228451 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zq7v\" (UniqueName: \"kubernetes.io/projected/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-kube-api-access-8zq7v\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228521 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228649 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-service-ca\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228701 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-trusted-ca-bundle\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228841 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.228934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-oauth-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 
07:26:05.229001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-oauth-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330316 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-oauth-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-oauth-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330818 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zq7v\" (UniqueName: \"kubernetes.io/projected/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-kube-api-access-8zq7v\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: 
I1006 07:26:05.330878 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330900 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-service-ca\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.330932 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-trusted-ca-bundle\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.333045 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-oauth-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.334638 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.334859 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-trusted-ca-bundle\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.335084 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-service-ca\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.340900 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-oauth-config\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.342110 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-console-serving-cert\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.350730 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zq7v\" (UniqueName: \"kubernetes.io/projected/61bc3e32-fe07-4a01-aa14-bc6b01331b7c-kube-api-access-8zq7v\") pod \"console-8d748fb98-7bd6r\" (UID: \"61bc3e32-fe07-4a01-aa14-bc6b01331b7c\") " pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.368061 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f"] Oct 06 
07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.405976 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q"] Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.521118 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.635960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.639401 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/802ef315-9d05-4501-ac19-e994a822fec7-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-p7rg7\" (UID: \"802ef315-9d05-4501-ac19-e994a822fec7\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.662969 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ddkqb" event={"ID":"358b3c8a-e981-4525-af7a-bbf05421b9fa","Type":"ContainerStarted","Data":"c19a1955f051f54748e6afb3c816253a94b7cda80d7a2ecb6766320cd7e58e16"} Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.664278 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" event={"ID":"dd69fd45-0ee0-4358-8f30-39e251e3a27f","Type":"ContainerStarted","Data":"d5737265e903a746f6d65f9ebc07f35efdd054f10bb84f4054e1862aa808dfce"} Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.665691 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" event={"ID":"6ab7169e-e46e-4e3b-a21a-7bd332467bb6","Type":"ContainerStarted","Data":"1ca193fb450d195843c73199be5b91aaaab3ea1968f0880568825ca9d35599f8"} Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.833654 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" Oct 06 07:26:05 crc kubenswrapper[4769]: I1006 07:26:05.986600 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-8d748fb98-7bd6r"] Oct 06 07:26:06 crc kubenswrapper[4769]: I1006 07:26:06.070874 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7"] Oct 06 07:26:06 crc kubenswrapper[4769]: W1006 07:26:06.083126 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802ef315_9d05_4501_ac19_e994a822fec7.slice/crio-0b76481fa5fb49bf87e3b45d0f6eb8db4b3c6cb99951c6fb970a3a9ae6c4f093 WatchSource:0}: Error finding container 0b76481fa5fb49bf87e3b45d0f6eb8db4b3c6cb99951c6fb970a3a9ae6c4f093: Status 404 returned error can't find the container with id 0b76481fa5fb49bf87e3b45d0f6eb8db4b3c6cb99951c6fb970a3a9ae6c4f093 Oct 06 07:26:06 crc kubenswrapper[4769]: I1006 07:26:06.685740 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d748fb98-7bd6r" event={"ID":"61bc3e32-fe07-4a01-aa14-bc6b01331b7c","Type":"ContainerStarted","Data":"9876c6d0f8d7f27a9dc492b0f5d65d63b2b7dc3f6d5919644a5488f33eb23a02"} Oct 06 07:26:06 crc kubenswrapper[4769]: I1006 07:26:06.685800 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-8d748fb98-7bd6r" event={"ID":"61bc3e32-fe07-4a01-aa14-bc6b01331b7c","Type":"ContainerStarted","Data":"56a4df813dee8834871dd2a21d2f63c0ccdcdf78d464395eeb3f7d4d9786996b"} Oct 06 07:26:06 crc kubenswrapper[4769]: I1006 07:26:06.689251 
4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" event={"ID":"802ef315-9d05-4501-ac19-e994a822fec7","Type":"ContainerStarted","Data":"0b76481fa5fb49bf87e3b45d0f6eb8db4b3c6cb99951c6fb970a3a9ae6c4f093"} Oct 06 07:26:06 crc kubenswrapper[4769]: I1006 07:26:06.709568 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-8d748fb98-7bd6r" podStartSLOduration=1.709551783 podStartE2EDuration="1.709551783s" podCreationTimestamp="2025-10-06 07:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:26:06.707170988 +0000 UTC m=+563.231452135" watchObservedRunningTime="2025-10-06 07:26:06.709551783 +0000 UTC m=+563.233832930" Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.702791 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" event={"ID":"dd69fd45-0ee0-4358-8f30-39e251e3a27f","Type":"ContainerStarted","Data":"6e060b7c26963275240694cf925732ed91713c03de01e6533f36d3a3e4b575da"} Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.703722 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.706339 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" event={"ID":"6ab7169e-e46e-4e3b-a21a-7bd332467bb6","Type":"ContainerStarted","Data":"4c4057b0ca1ebaefea231143f0e4810940c35e40efb525de4a985c81d57cce47"} Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.708922 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-ddkqb" event={"ID":"358b3c8a-e981-4525-af7a-bbf05421b9fa","Type":"ContainerStarted","Data":"58920183297f75d44c0027b1a253c9501da492c3d34530d349a50e7acccc147a"} 
Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.709175 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.716272 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" podStartSLOduration=2.450715092 podStartE2EDuration="4.71625183s" podCreationTimestamp="2025-10-06 07:26:04 +0000 UTC" firstStartedPulling="2025-10-06 07:26:05.381380055 +0000 UTC m=+561.905661202" lastFinishedPulling="2025-10-06 07:26:07.646916793 +0000 UTC m=+564.171197940" observedRunningTime="2025-10-06 07:26:08.715078197 +0000 UTC m=+565.239359374" watchObservedRunningTime="2025-10-06 07:26:08.71625183 +0000 UTC m=+565.240532977" Oct 06 07:26:08 crc kubenswrapper[4769]: I1006 07:26:08.734296 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-ddkqb" podStartSLOduration=2.267460618 podStartE2EDuration="4.734277465s" podCreationTimestamp="2025-10-06 07:26:04 +0000 UTC" firstStartedPulling="2025-10-06 07:26:05.194388419 +0000 UTC m=+561.718669556" lastFinishedPulling="2025-10-06 07:26:07.661205256 +0000 UTC m=+564.185486403" observedRunningTime="2025-10-06 07:26:08.730994415 +0000 UTC m=+565.255275572" watchObservedRunningTime="2025-10-06 07:26:08.734277465 +0000 UTC m=+565.258558622" Oct 06 07:26:09 crc kubenswrapper[4769]: I1006 07:26:09.716124 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" event={"ID":"802ef315-9d05-4501-ac19-e994a822fec7","Type":"ContainerStarted","Data":"6e3eb88be24ca15812662a1a4ebcb7df59049cb2ad7ca4410611c0beb396d146"} Oct 06 07:26:09 crc kubenswrapper[4769]: I1006 07:26:09.732799 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-p7rg7" podStartSLOduration=3.117254192 
podStartE2EDuration="5.732772405s" podCreationTimestamp="2025-10-06 07:26:04 +0000 UTC" firstStartedPulling="2025-10-06 07:26:06.089399656 +0000 UTC m=+562.613680823" lastFinishedPulling="2025-10-06 07:26:08.704917889 +0000 UTC m=+565.229199036" observedRunningTime="2025-10-06 07:26:09.731823929 +0000 UTC m=+566.256105086" watchObservedRunningTime="2025-10-06 07:26:09.732772405 +0000 UTC m=+566.257053572" Oct 06 07:26:10 crc kubenswrapper[4769]: I1006 07:26:10.723842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" event={"ID":"6ab7169e-e46e-4e3b-a21a-7bd332467bb6","Type":"ContainerStarted","Data":"a991a740d579fa88464f7c87c7087238258ee9cc72c716b952eeec341b9fd8b6"} Oct 06 07:26:10 crc kubenswrapper[4769]: I1006 07:26:10.744542 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-r8v6q" podStartSLOduration=2.480066578 podStartE2EDuration="6.744515349s" podCreationTimestamp="2025-10-06 07:26:04 +0000 UTC" firstStartedPulling="2025-10-06 07:26:05.413385445 +0000 UTC m=+561.937666602" lastFinishedPulling="2025-10-06 07:26:09.677834226 +0000 UTC m=+566.202115373" observedRunningTime="2025-10-06 07:26:10.738373271 +0000 UTC m=+567.262654458" watchObservedRunningTime="2025-10-06 07:26:10.744515349 +0000 UTC m=+567.268796536" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.177905 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-ddkqb" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.521739 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.521951 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.528891 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.759577 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-8d748fb98-7bd6r" Oct 06 07:26:15 crc kubenswrapper[4769]: I1006 07:26:15.873883 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"] Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.245840 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.246536 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.246639 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.247855 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.248048 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9" gracePeriod=600 Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.801570 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9" exitCode=0 Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.801609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9"} Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.801922 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e"} Oct 06 07:26:22 crc kubenswrapper[4769]: I1006 07:26:22.801947 4769 scope.go:117] "RemoveContainer" containerID="11e59b2b2ebfa1dcc1e964ac9bc1a81efea85c20205e4795427a2b0a5e17c9e6" Oct 06 07:26:25 crc kubenswrapper[4769]: I1006 07:26:25.125649 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-brg5f" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.691491 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6"] Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.695377 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.699944 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.707295 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6"] Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.851977 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.852291 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.852395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b8b4\" (UniqueName: \"kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: 
I1006 07:26:38.954044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.954085 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.954115 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b8b4\" (UniqueName: \"kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.954749 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.954912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:38 crc kubenswrapper[4769]: I1006 07:26:38.971384 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b8b4\" (UniqueName: \"kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:39 crc kubenswrapper[4769]: I1006 07:26:39.013403 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:39 crc kubenswrapper[4769]: I1006 07:26:39.215964 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6"] Oct 06 07:26:39 crc kubenswrapper[4769]: I1006 07:26:39.918846 4769 generic.go:334] "Generic (PLEG): container finished" podID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerID="f454b8df19483d089e3f55f6c27c8d2b615fa58fdc72f07d76e220c015403f65" exitCode=0 Oct 06 07:26:39 crc kubenswrapper[4769]: I1006 07:26:39.918911 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" event={"ID":"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c","Type":"ContainerDied","Data":"f454b8df19483d089e3f55f6c27c8d2b615fa58fdc72f07d76e220c015403f65"} Oct 06 07:26:39 crc kubenswrapper[4769]: I1006 07:26:39.918951 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" event={"ID":"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c","Type":"ContainerStarted","Data":"f40e35839cc73d52e6e596b60c1db09ef2fab403b3ea571ac510d6f866e79de1"} Oct 06 07:26:40 crc kubenswrapper[4769]: I1006 07:26:40.917040 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-lsg5p" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerName="console" containerID="cri-o://8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a" gracePeriod=15 Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.364732 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lsg5p_7e16e210-5266-45ae-9f3d-c214c5c173a4/console/0.log" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.365140 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492718 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492773 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4fwm\" (UniqueName: \"kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492804 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492832 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492860 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492882 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.492907 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config\") pod \"7e16e210-5266-45ae-9f3d-c214c5c173a4\" (UID: \"7e16e210-5266-45ae-9f3d-c214c5c173a4\") " Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.493806 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca" (OuterVolumeSpecName: "service-ca") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.494034 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.494065 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config" (OuterVolumeSpecName: "console-config") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.494101 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.500551 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.501164 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.503415 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm" (OuterVolumeSpecName: "kube-api-access-z4fwm") pod "7e16e210-5266-45ae-9f3d-c214c5c173a4" (UID: "7e16e210-5266-45ae-9f3d-c214c5c173a4"). InnerVolumeSpecName "kube-api-access-z4fwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594330 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4fwm\" (UniqueName: \"kubernetes.io/projected/7e16e210-5266-45ae-9f3d-c214c5c173a4-kube-api-access-z4fwm\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594384 4769 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594403 4769 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594445 4769 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594463 4769 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7e16e210-5266-45ae-9f3d-c214c5c173a4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594483 4769 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-service-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.594501 4769 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e16e210-5266-45ae-9f3d-c214c5c173a4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935077 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-lsg5p_7e16e210-5266-45ae-9f3d-c214c5c173a4/console/0.log" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935163 4769 generic.go:334] "Generic (PLEG): container finished" podID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerID="8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a" exitCode=2 Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935271 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-lsg5p" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935356 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lsg5p" event={"ID":"7e16e210-5266-45ae-9f3d-c214c5c173a4","Type":"ContainerDied","Data":"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a"} Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935548 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-lsg5p" event={"ID":"7e16e210-5266-45ae-9f3d-c214c5c173a4","Type":"ContainerDied","Data":"7a78fc41b73a3e9d8ee2cd6e75203f8ab7a482dc9951c1182e37e9fee757f831"} Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.935606 4769 scope.go:117] "RemoveContainer" containerID="8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.938285 4769 generic.go:334] "Generic (PLEG): container finished" podID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerID="7f2ebda88a63f713bf91eae0514b2d4d6bcb68c44d7cc82fe5d30abfb629b481" exitCode=0 Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.938357 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" event={"ID":"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c","Type":"ContainerDied","Data":"7f2ebda88a63f713bf91eae0514b2d4d6bcb68c44d7cc82fe5d30abfb629b481"} Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.971555 4769 scope.go:117] "RemoveContainer" containerID="8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a" Oct 06 07:26:41 crc kubenswrapper[4769]: E1006 07:26:41.972311 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a\": container with ID starting with 
8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a not found: ID does not exist" containerID="8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a" Oct 06 07:26:41 crc kubenswrapper[4769]: I1006 07:26:41.972491 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a"} err="failed to get container status \"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a\": rpc error: code = NotFound desc = could not find container \"8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a\": container with ID starting with 8a5b7fd2a6523670d806d46cfe19cd1ec9a9c2a31f1074fb7752f3bf6fcc003a not found: ID does not exist" Oct 06 07:26:42 crc kubenswrapper[4769]: I1006 07:26:42.005350 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"] Oct 06 07:26:42 crc kubenswrapper[4769]: I1006 07:26:42.011710 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-lsg5p"] Oct 06 07:26:42 crc kubenswrapper[4769]: I1006 07:26:42.182169 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" path="/var/lib/kubelet/pods/7e16e210-5266-45ae-9f3d-c214c5c173a4/volumes" Oct 06 07:26:42 crc kubenswrapper[4769]: I1006 07:26:42.955623 4769 generic.go:334] "Generic (PLEG): container finished" podID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerID="9bd2a48419c9d1015b5b9727931b3ee4de303510bd286895c379a98127f01906" exitCode=0 Oct 06 07:26:42 crc kubenswrapper[4769]: I1006 07:26:42.956087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" event={"ID":"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c","Type":"ContainerDied","Data":"9bd2a48419c9d1015b5b9727931b3ee4de303510bd286895c379a98127f01906"} Oct 06 07:26:44 crc 
kubenswrapper[4769]: I1006 07:26:44.222762 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.333642 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b8b4\" (UniqueName: \"kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4\") pod \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.333818 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util\") pod \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.333861 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle\") pod \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\" (UID: \"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c\") " Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.335138 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle" (OuterVolumeSpecName: "bundle") pod "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" (UID: "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.338329 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4" (OuterVolumeSpecName: "kube-api-access-6b8b4") pod "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" (UID: "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c"). InnerVolumeSpecName "kube-api-access-6b8b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.347010 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util" (OuterVolumeSpecName: "util") pod "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" (UID: "38973f50-be18-4cb1-a8bd-cb5d2eb5b22c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.434910 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-util\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.435132 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.435211 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b8b4\" (UniqueName: \"kubernetes.io/projected/38973f50-be18-4cb1-a8bd-cb5d2eb5b22c-kube-api-access-6b8b4\") on node \"crc\" DevicePath \"\"" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.444338 4769 scope.go:117] "RemoveContainer" containerID="d2d1d5c1872e49b29c172c5ae1ec4ddf9dd2d4961720ddc962aec74715befd55" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.466147 4769 scope.go:117] "RemoveContainer" 
containerID="dab2ca22cf77d15a06bef0bd14cf1c4bc8ab17ce33b593814c008b887f2b1341" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.974414 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" event={"ID":"38973f50-be18-4cb1-a8bd-cb5d2eb5b22c","Type":"ContainerDied","Data":"f40e35839cc73d52e6e596b60c1db09ef2fab403b3ea571ac510d6f866e79de1"} Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.974580 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40e35839cc73d52e6e596b60c1db09ef2fab403b3ea571ac510d6f866e79de1" Oct 06 07:26:44 crc kubenswrapper[4769]: I1006 07:26:44.974715 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.810388 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg"] Oct 06 07:26:56 crc kubenswrapper[4769]: E1006 07:26:56.811077 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerName="console" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811089 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerName="console" Oct 06 07:26:56 crc kubenswrapper[4769]: E1006 07:26:56.811101 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="util" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811107 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="util" Oct 06 07:26:56 crc kubenswrapper[4769]: E1006 07:26:56.811116 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="pull" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811122 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="pull" Oct 06 07:26:56 crc kubenswrapper[4769]: E1006 07:26:56.811131 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="extract" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811136 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="extract" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811222 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="38973f50-be18-4cb1-a8bd-cb5d2eb5b22c" containerName="extract" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811231 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e16e210-5266-45ae-9f3d-c214c5c173a4" containerName="console" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.811600 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.814343 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-7plc8" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.815395 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.815486 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.815883 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.818569 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.828063 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg"] Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.999252 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr55k\" (UniqueName: \"kubernetes.io/projected/45aedf89-f0a0-4a64-839e-5dd2676b71ae-kube-api-access-wr55k\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.999317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-webhook-cert\") pod 
\"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:56 crc kubenswrapper[4769]: I1006 07:26:56.999407 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-apiservice-cert\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.100977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-webhook-cert\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.101049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-apiservice-cert\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.101103 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr55k\" (UniqueName: \"kubernetes.io/projected/45aedf89-f0a0-4a64-839e-5dd2676b71ae-kube-api-access-wr55k\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc 
kubenswrapper[4769]: I1006 07:26:57.107151 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-apiservice-cert\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.117202 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/45aedf89-f0a0-4a64-839e-5dd2676b71ae-webhook-cert\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.124088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr55k\" (UniqueName: \"kubernetes.io/projected/45aedf89-f0a0-4a64-839e-5dd2676b71ae-kube-api-access-wr55k\") pod \"metallb-operator-controller-manager-778bd9b8cd-kv2fg\" (UID: \"45aedf89-f0a0-4a64-839e-5dd2676b71ae\") " pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.127138 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.156799 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6689b594-655tz"] Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.157481 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.164747 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.165034 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9c7kp" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.165709 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.221666 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6689b594-655tz"] Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.303182 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk694\" (UniqueName: \"kubernetes.io/projected/9f122875-038b-4bae-ad57-07c899fc54ad-kube-api-access-vk694\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.303271 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.303296 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-webhook-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.405052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.405101 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-webhook-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.405165 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk694\" (UniqueName: \"kubernetes.io/projected/9f122875-038b-4bae-ad57-07c899fc54ad-kube-api-access-vk694\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.416317 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-webhook-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc 
kubenswrapper[4769]: I1006 07:26:57.429992 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk694\" (UniqueName: \"kubernetes.io/projected/9f122875-038b-4bae-ad57-07c899fc54ad-kube-api-access-vk694\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.443071 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f122875-038b-4bae-ad57-07c899fc54ad-apiservice-cert\") pod \"metallb-operator-webhook-server-6b6689b594-655tz\" (UID: \"9f122875-038b-4bae-ad57-07c899fc54ad\") " pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.470838 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg"] Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.486827 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:26:57 crc kubenswrapper[4769]: I1006 07:26:57.895280 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b6689b594-655tz"] Oct 06 07:26:57 crc kubenswrapper[4769]: W1006 07:26:57.899645 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f122875_038b_4bae_ad57_07c899fc54ad.slice/crio-0b642c25081d70100968007dcfdd101107eca1ebaa8b2cf39660f0886efe364d WatchSource:0}: Error finding container 0b642c25081d70100968007dcfdd101107eca1ebaa8b2cf39660f0886efe364d: Status 404 returned error can't find the container with id 0b642c25081d70100968007dcfdd101107eca1ebaa8b2cf39660f0886efe364d Oct 06 07:26:58 crc kubenswrapper[4769]: I1006 07:26:58.040197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" event={"ID":"9f122875-038b-4bae-ad57-07c899fc54ad","Type":"ContainerStarted","Data":"0b642c25081d70100968007dcfdd101107eca1ebaa8b2cf39660f0886efe364d"} Oct 06 07:26:58 crc kubenswrapper[4769]: I1006 07:26:58.041773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" event={"ID":"45aedf89-f0a0-4a64-839e-5dd2676b71ae","Type":"ContainerStarted","Data":"7b2ea9ec4eb99d02c84c9735e96f3e13f55915e83e0af78dc86b5febc15edbcf"} Oct 06 07:27:02 crc kubenswrapper[4769]: I1006 07:27:02.070378 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" event={"ID":"9f122875-038b-4bae-ad57-07c899fc54ad","Type":"ContainerStarted","Data":"89d7eddea4385c7d29f88986afd257a3ea034e5996480f258b6f78b743953778"} Oct 06 07:27:02 crc kubenswrapper[4769]: I1006 07:27:02.071102 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:27:02 crc kubenswrapper[4769]: I1006 07:27:02.093048 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" podStartSLOduration=1.535109381 podStartE2EDuration="5.093028956s" podCreationTimestamp="2025-10-06 07:26:57 +0000 UTC" firstStartedPulling="2025-10-06 07:26:57.903145108 +0000 UTC m=+614.427426265" lastFinishedPulling="2025-10-06 07:27:01.461064693 +0000 UTC m=+617.985345840" observedRunningTime="2025-10-06 07:27:02.089844558 +0000 UTC m=+618.614125705" watchObservedRunningTime="2025-10-06 07:27:02.093028956 +0000 UTC m=+618.617310103" Oct 06 07:27:03 crc kubenswrapper[4769]: I1006 07:27:03.078930 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" event={"ID":"45aedf89-f0a0-4a64-839e-5dd2676b71ae","Type":"ContainerStarted","Data":"951a98ed229ceb182d6a5cbf76eaaabee0742e039f579f74e62688617eb24040"} Oct 06 07:27:03 crc kubenswrapper[4769]: I1006 07:27:03.102471 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" podStartSLOduration=1.731919915 podStartE2EDuration="7.102455415s" podCreationTimestamp="2025-10-06 07:26:56 +0000 UTC" firstStartedPulling="2025-10-06 07:26:57.483289119 +0000 UTC m=+614.007570266" lastFinishedPulling="2025-10-06 07:27:02.853824619 +0000 UTC m=+619.378105766" observedRunningTime="2025-10-06 07:27:03.099184755 +0000 UTC m=+619.623465972" watchObservedRunningTime="2025-10-06 07:27:03.102455415 +0000 UTC m=+619.626736562" Oct 06 07:27:04 crc kubenswrapper[4769]: I1006 07:27:04.084107 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:27:17 crc kubenswrapper[4769]: I1006 07:27:17.492856 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b6689b594-655tz" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.130064 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-778bd9b8cd-kv2fg" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.853277 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96"] Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.853938 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.860107 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mc84h" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.860178 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.861514 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5mfsm"] Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.863619 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.865240 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.865709 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.877664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96"] Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939092 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939135 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrk4\" (UniqueName: \"kubernetes.io/projected/c9cec005-4cae-4484-bfe3-03bed62e27b8-kube-api-access-8nrk4\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939182 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-conf\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939199 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-sockets\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939224 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-reloader\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939466 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5nqg\" (UniqueName: \"kubernetes.io/projected/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-kube-api-access-s5nqg\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.939498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-startup\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.961737 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tqlvs"] Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.962709 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tqlvs" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.964472 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.964629 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.964833 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-z6dlh" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.965100 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.975760 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-9nkz5"] Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.976754 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:37 crc kubenswrapper[4769]: I1006 07:27:37.980148 4769 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.005226 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-9nkz5"] Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050440 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050501 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d88f6701-fae7-4e41-b349-99fce99be6da-metallb-excludel2\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050532 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050558 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-metrics-certs\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc 
kubenswrapper[4769]: I1006 07:27:38.050645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050688 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nrk4\" (UniqueName: \"kubernetes.io/projected/c9cec005-4cae-4484-bfe3-03bed62e27b8-kube-api-access-8nrk4\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050744 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-conf\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050765 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-sockets\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.050787 4769 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.050836 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert podName:9bf51374-74ce-4d2d-a19b-6fcc29e09d29 nodeName:}" failed. 
No retries permitted until 2025-10-06 07:27:38.55081855 +0000 UTC m=+655.075099697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert") pod "frr-k8s-webhook-server-64bf5d555-dzl96" (UID: "9bf51374-74ce-4d2d-a19b-6fcc29e09d29") : secret "frr-k8s-webhook-server-cert" not found Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050855 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050913 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050932 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-reloader\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.050966 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-cert\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wcd6z\" (UniqueName: \"kubernetes.io/projected/d88f6701-fae7-4e41-b349-99fce99be6da-kube-api-access-wcd6z\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051023 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5nqg\" (UniqueName: \"kubernetes.io/projected/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-kube-api-access-s5nqg\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051043 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqh5j\" (UniqueName: \"kubernetes.io/projected/7cfbeaee-4edb-49a6-a887-520cd4922ca1-kube-api-access-wqh5j\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-startup\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.051078 4769 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.051159 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs podName:c9cec005-4cae-4484-bfe3-03bed62e27b8 nodeName:}" failed. 
No retries permitted until 2025-10-06 07:27:38.551142689 +0000 UTC m=+655.075423836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs") pod "frr-k8s-5mfsm" (UID: "c9cec005-4cae-4484-bfe3-03bed62e27b8") : secret "frr-k8s-certs-secret" not found Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051249 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-sockets\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051327 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-conf\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051487 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.051931 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c9cec005-4cae-4484-bfe3-03bed62e27b8-frr-startup\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.060086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c9cec005-4cae-4484-bfe3-03bed62e27b8-reloader\") pod 
\"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.072015 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5nqg\" (UniqueName: \"kubernetes.io/projected/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-kube-api-access-s5nqg\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.079443 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nrk4\" (UniqueName: \"kubernetes.io/projected/c9cec005-4cae-4484-bfe3-03bed62e27b8-kube-api-access-8nrk4\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.151946 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-cert\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.151999 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcd6z\" (UniqueName: \"kubernetes.io/projected/d88f6701-fae7-4e41-b349-99fce99be6da-kube-api-access-wcd6z\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.152018 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqh5j\" (UniqueName: \"kubernetes.io/projected/7cfbeaee-4edb-49a6-a887-520cd4922ca1-kube-api-access-wqh5j\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " 
pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.152044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.152063 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d88f6701-fae7-4e41-b349-99fce99be6da-metallb-excludel2\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.152082 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.152099 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-metrics-certs\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.152330 4769 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.152399 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs podName:d88f6701-fae7-4e41-b349-99fce99be6da nodeName:}" failed. 
No retries permitted until 2025-10-06 07:27:38.652381767 +0000 UTC m=+655.176662914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs") pod "speaker-tqlvs" (UID: "d88f6701-fae7-4e41-b349-99fce99be6da") : secret "speaker-certs-secret" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.152490 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.152537 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist podName:d88f6701-fae7-4e41-b349-99fce99be6da nodeName:}" failed. No retries permitted until 2025-10-06 07:27:38.65251992 +0000 UTC m=+655.176801067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist") pod "speaker-tqlvs" (UID: "d88f6701-fae7-4e41-b349-99fce99be6da") : secret "metallb-memberlist" not found Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.153004 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d88f6701-fae7-4e41-b349-99fce99be6da-metallb-excludel2\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.155128 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-metrics-certs\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.156742 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7cfbeaee-4edb-49a6-a887-520cd4922ca1-cert\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.178284 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcd6z\" (UniqueName: \"kubernetes.io/projected/d88f6701-fae7-4e41-b349-99fce99be6da-kube-api-access-wcd6z\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.179251 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqh5j\" (UniqueName: \"kubernetes.io/projected/7cfbeaee-4edb-49a6-a887-520cd4922ca1-kube-api-access-wqh5j\") pod \"controller-68d546b9d8-9nkz5\" (UID: \"7cfbeaee-4edb-49a6-a887-520cd4922ca1\") " pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.297099 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.557487 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.557780 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.562223 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9bf51374-74ce-4d2d-a19b-6fcc29e09d29-cert\") pod \"frr-k8s-webhook-server-64bf5d555-dzl96\" (UID: \"9bf51374-74ce-4d2d-a19b-6fcc29e09d29\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.562335 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c9cec005-4cae-4484-bfe3-03bed62e27b8-metrics-certs\") pod \"frr-k8s-5mfsm\" (UID: \"c9cec005-4cae-4484-bfe3-03bed62e27b8\") " pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.658763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.658830 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.659009 4769 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 06 07:27:38 crc kubenswrapper[4769]: E1006 07:27:38.659074 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist podName:d88f6701-fae7-4e41-b349-99fce99be6da nodeName:}" failed. No retries permitted until 2025-10-06 07:27:39.659056554 +0000 UTC m=+656.183337701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist") pod "speaker-tqlvs" (UID: "d88f6701-fae7-4e41-b349-99fce99be6da") : secret "metallb-memberlist" not found Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.662677 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-metrics-certs\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.689618 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-9nkz5"] Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.782836 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:38 crc kubenswrapper[4769]: I1006 07:27:38.786132 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.244396 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96"] Oct 06 07:27:39 crc kubenswrapper[4769]: W1006 07:27:39.249660 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bf51374_74ce_4d2d_a19b_6fcc29e09d29.slice/crio-50c5a714049b99e8e2c6e08f587d1e0e59d45720cc6574d8e2539dc834a29d41 WatchSource:0}: Error finding container 50c5a714049b99e8e2c6e08f587d1e0e59d45720cc6574d8e2539dc834a29d41: Status 404 returned error can't find the container with id 50c5a714049b99e8e2c6e08f587d1e0e59d45720cc6574d8e2539dc834a29d41 Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.279403 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" event={"ID":"9bf51374-74ce-4d2d-a19b-6fcc29e09d29","Type":"ContainerStarted","Data":"50c5a714049b99e8e2c6e08f587d1e0e59d45720cc6574d8e2539dc834a29d41"} Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.280415 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"88d8192b91aa63bfb17d16a41c7be1ec209488b927abf8de96eeb4050b6ec340"} Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.282266 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-9nkz5" event={"ID":"7cfbeaee-4edb-49a6-a887-520cd4922ca1","Type":"ContainerStarted","Data":"511c16d7df3e5c94e4611046a631fa2897feb58f4807137a85bffd5e588fa783"} Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.282293 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-9nkz5" 
event={"ID":"7cfbeaee-4edb-49a6-a887-520cd4922ca1","Type":"ContainerStarted","Data":"ab2f9b039aee70688271e5d9728e3d6077b38ca474eba9b3166c248564bf6f98"} Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.282310 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-9nkz5" event={"ID":"7cfbeaee-4edb-49a6-a887-520cd4922ca1","Type":"ContainerStarted","Data":"aeb71dba94fc9bd127316427d1963ebd3dfe08784fa4dd4285a3d675efcd6d8f"} Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.282405 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.306998 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-9nkz5" podStartSLOduration=2.306976233 podStartE2EDuration="2.306976233s" podCreationTimestamp="2025-10-06 07:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:27:39.302456369 +0000 UTC m=+655.826737566" watchObservedRunningTime="2025-10-06 07:27:39.306976233 +0000 UTC m=+655.831257380" Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.677341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.688416 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d88f6701-fae7-4e41-b349-99fce99be6da-memberlist\") pod \"speaker-tqlvs\" (UID: \"d88f6701-fae7-4e41-b349-99fce99be6da\") " pod="metallb-system/speaker-tqlvs" Oct 06 07:27:39 crc kubenswrapper[4769]: I1006 07:27:39.786686 4769 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tqlvs" Oct 06 07:27:39 crc kubenswrapper[4769]: W1006 07:27:39.807942 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd88f6701_fae7_4e41_b349_99fce99be6da.slice/crio-75d4b28f3ca7d9141af9eeb35ff663a55b5caa7070cd756b342866ead145678e WatchSource:0}: Error finding container 75d4b28f3ca7d9141af9eeb35ff663a55b5caa7070cd756b342866ead145678e: Status 404 returned error can't find the container with id 75d4b28f3ca7d9141af9eeb35ff663a55b5caa7070cd756b342866ead145678e Oct 06 07:27:40 crc kubenswrapper[4769]: I1006 07:27:40.303508 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqlvs" event={"ID":"d88f6701-fae7-4e41-b349-99fce99be6da","Type":"ContainerStarted","Data":"a2b6a5e4d5012df8a4ffdb0935df8dcfa699b6b5c4e8825ecf9884be5ea15971"} Oct 06 07:27:40 crc kubenswrapper[4769]: I1006 07:27:40.303546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqlvs" event={"ID":"d88f6701-fae7-4e41-b349-99fce99be6da","Type":"ContainerStarted","Data":"75d4b28f3ca7d9141af9eeb35ff663a55b5caa7070cd756b342866ead145678e"} Oct 06 07:27:41 crc kubenswrapper[4769]: I1006 07:27:41.313686 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tqlvs" event={"ID":"d88f6701-fae7-4e41-b349-99fce99be6da","Type":"ContainerStarted","Data":"22d52b3e178623e70f44e0c8a552038047d0438b68f3a8c2f80510e087ed23cb"} Oct 06 07:27:41 crc kubenswrapper[4769]: I1006 07:27:41.314120 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tqlvs" Oct 06 07:27:41 crc kubenswrapper[4769]: I1006 07:27:41.341655 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tqlvs" podStartSLOduration=4.341636473 podStartE2EDuration="4.341636473s" podCreationTimestamp="2025-10-06 
07:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:27:41.339977407 +0000 UTC m=+657.864258564" watchObservedRunningTime="2025-10-06 07:27:41.341636473 +0000 UTC m=+657.865917610" Oct 06 07:27:46 crc kubenswrapper[4769]: I1006 07:27:46.343390 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9cec005-4cae-4484-bfe3-03bed62e27b8" containerID="36904719c7881faaf18443a2d5d13f771f3c32030109d359122deb33efa67e1c" exitCode=0 Oct 06 07:27:46 crc kubenswrapper[4769]: I1006 07:27:46.343463 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerDied","Data":"36904719c7881faaf18443a2d5d13f771f3c32030109d359122deb33efa67e1c"} Oct 06 07:27:46 crc kubenswrapper[4769]: I1006 07:27:46.345882 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" event={"ID":"9bf51374-74ce-4d2d-a19b-6fcc29e09d29","Type":"ContainerStarted","Data":"47abd9e4f0eaa4fbbcd758d6932aeadd1a4afb9bbdbf9ac7446f509890b33df0"} Oct 06 07:27:46 crc kubenswrapper[4769]: I1006 07:27:46.346007 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:46 crc kubenswrapper[4769]: I1006 07:27:46.394480 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" podStartSLOduration=2.9598197859999997 podStartE2EDuration="9.394412863s" podCreationTimestamp="2025-10-06 07:27:37 +0000 UTC" firstStartedPulling="2025-10-06 07:27:39.259749382 +0000 UTC m=+655.784030529" lastFinishedPulling="2025-10-06 07:27:45.694342459 +0000 UTC m=+662.218623606" observedRunningTime="2025-10-06 07:27:46.387937266 +0000 UTC m=+662.912218423" watchObservedRunningTime="2025-10-06 07:27:46.394412863 +0000 UTC 
m=+662.918694010" Oct 06 07:27:47 crc kubenswrapper[4769]: I1006 07:27:47.353641 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9cec005-4cae-4484-bfe3-03bed62e27b8" containerID="0ec15c6a1608ac7a655a6fc588d27816ae138ecb792807465ac3607cad1ea663" exitCode=0 Oct 06 07:27:47 crc kubenswrapper[4769]: I1006 07:27:47.353718 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerDied","Data":"0ec15c6a1608ac7a655a6fc588d27816ae138ecb792807465ac3607cad1ea663"} Oct 06 07:27:48 crc kubenswrapper[4769]: I1006 07:27:48.300252 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-9nkz5" Oct 06 07:27:48 crc kubenswrapper[4769]: I1006 07:27:48.360796 4769 generic.go:334] "Generic (PLEG): container finished" podID="c9cec005-4cae-4484-bfe3-03bed62e27b8" containerID="52df036f589ee5dcbf47d7582c841806af0838198922b8d1c72d1466f90e9fb7" exitCode=0 Oct 06 07:27:48 crc kubenswrapper[4769]: I1006 07:27:48.360842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerDied","Data":"52df036f589ee5dcbf47d7582c841806af0838198922b8d1c72d1466f90e9fb7"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373522 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"97303539c797b0df447f85a129f1b7a0bd2b7e90ef4a05f312c18d3dbda7debf"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373562 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"b2bb42e36e9e8b5d16f4caea10ac2d8b3e9149809c82d8f43fb84c7f6649ea0c"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373572 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"be622186ed963bb00748e7037a29753ba12b89503bc5efe5559cd3ae680b3c3e"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"f1b472e1c44a632870b408d5726a979e12bd0b9211cb490ccc01a7dbeda9aa1e"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373588 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"db58a6dfeb6a70a06e54b3f945e24489727ae471b952689d7268a3803abd740f"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.373596 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5mfsm" event={"ID":"c9cec005-4cae-4484-bfe3-03bed62e27b8","Type":"ContainerStarted","Data":"9a3a1c3da677fab61a3592c1873fd0fd65fa3fdc3e0021f958dd21123ca2f939"} Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.374442 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:49 crc kubenswrapper[4769]: I1006 07:27:49.400948 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5mfsm" podStartSLOduration=5.627041214 podStartE2EDuration="12.400933115s" podCreationTimestamp="2025-10-06 07:27:37 +0000 UTC" firstStartedPulling="2025-10-06 07:27:38.910604299 +0000 UTC m=+655.434885446" lastFinishedPulling="2025-10-06 07:27:45.6844962 +0000 UTC m=+662.208777347" observedRunningTime="2025-10-06 07:27:49.400155094 +0000 UTC m=+665.924436261" watchObservedRunningTime="2025-10-06 07:27:49.400933115 +0000 UTC m=+665.925214262" Oct 06 07:27:53 crc kubenswrapper[4769]: I1006 07:27:53.786688 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:53 crc kubenswrapper[4769]: I1006 07:27:53.821768 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:58 crc kubenswrapper[4769]: I1006 07:27:58.791830 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5mfsm" Oct 06 07:27:58 crc kubenswrapper[4769]: I1006 07:27:58.792400 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-dzl96" Oct 06 07:27:59 crc kubenswrapper[4769]: I1006 07:27:59.790331 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tqlvs" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.548097 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"] Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.549359 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vkzrg" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.551276 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.553894 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-4bpfk" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.556282 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.561256 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"] Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.567702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gs5g\" (UniqueName: \"kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g\") pod \"openstack-operator-index-vkzrg\" (UID: \"ba55e1cb-d03a-40b8-8f68-85e0735a9177\") " pod="openstack-operators/openstack-operator-index-vkzrg" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.681052 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gs5g\" (UniqueName: \"kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g\") pod \"openstack-operator-index-vkzrg\" (UID: \"ba55e1cb-d03a-40b8-8f68-85e0735a9177\") " pod="openstack-operators/openstack-operator-index-vkzrg" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.703670 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gs5g\" (UniqueName: \"kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g\") pod \"openstack-operator-index-vkzrg\" (UID: 
\"ba55e1cb-d03a-40b8-8f68-85e0735a9177\") " pod="openstack-operators/openstack-operator-index-vkzrg" Oct 06 07:28:02 crc kubenswrapper[4769]: I1006 07:28:02.875512 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vkzrg" Oct 06 07:28:03 crc kubenswrapper[4769]: I1006 07:28:03.304843 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"] Oct 06 07:28:03 crc kubenswrapper[4769]: I1006 07:28:03.447324 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vkzrg" event={"ID":"ba55e1cb-d03a-40b8-8f68-85e0735a9177","Type":"ContainerStarted","Data":"abaa75ba20ed1a866eaf1eaee32c4b4855d605137b9c75b73dbe756aa5c4971d"} Oct 06 07:28:04 crc kubenswrapper[4769]: I1006 07:28:04.454258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vkzrg" event={"ID":"ba55e1cb-d03a-40b8-8f68-85e0735a9177","Type":"ContainerStarted","Data":"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"} Oct 06 07:28:04 crc kubenswrapper[4769]: I1006 07:28:04.477127 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vkzrg" podStartSLOduration=1.683101583 podStartE2EDuration="2.477113494s" podCreationTimestamp="2025-10-06 07:28:02 +0000 UTC" firstStartedPulling="2025-10-06 07:28:03.317978793 +0000 UTC m=+679.842259930" lastFinishedPulling="2025-10-06 07:28:04.111990684 +0000 UTC m=+680.636271841" observedRunningTime="2025-10-06 07:28:04.476387813 +0000 UTC m=+681.000668980" watchObservedRunningTime="2025-10-06 07:28:04.477113494 +0000 UTC m=+681.001394641" Oct 06 07:28:05 crc kubenswrapper[4769]: I1006 07:28:05.924326 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"] Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.466178 
4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vkzrg" podUID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" containerName="registry-server" containerID="cri-o://8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d" gracePeriod=2
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.555117 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fqtb5"]
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.556659 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.568198 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fqtb5"]
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.749472 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh94n\" (UniqueName: \"kubernetes.io/projected/118688b6-b9e2-4de4-8c39-6bd6c68d158e-kube-api-access-wh94n\") pod \"openstack-operator-index-fqtb5\" (UID: \"118688b6-b9e2-4de4-8c39-6bd6c68d158e\") " pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.851310 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh94n\" (UniqueName: \"kubernetes.io/projected/118688b6-b9e2-4de4-8c39-6bd6c68d158e-kube-api-access-wh94n\") pod \"openstack-operator-index-fqtb5\" (UID: \"118688b6-b9e2-4de4-8c39-6bd6c68d158e\") " pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.874410 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh94n\" (UniqueName: \"kubernetes.io/projected/118688b6-b9e2-4de4-8c39-6bd6c68d158e-kube-api-access-wh94n\") pod \"openstack-operator-index-fqtb5\" (UID: \"118688b6-b9e2-4de4-8c39-6bd6c68d158e\") " pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.901350 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.906760 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vkzrg"
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.980599 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gs5g\" (UniqueName: \"kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g\") pod \"ba55e1cb-d03a-40b8-8f68-85e0735a9177\" (UID: \"ba55e1cb-d03a-40b8-8f68-85e0735a9177\") "
Oct 06 07:28:06 crc kubenswrapper[4769]: I1006 07:28:06.988760 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g" (OuterVolumeSpecName: "kube-api-access-9gs5g") pod "ba55e1cb-d03a-40b8-8f68-85e0735a9177" (UID: "ba55e1cb-d03a-40b8-8f68-85e0735a9177"). InnerVolumeSpecName "kube-api-access-9gs5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.081967 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gs5g\" (UniqueName: \"kubernetes.io/projected/ba55e1cb-d03a-40b8-8f68-85e0735a9177-kube-api-access-9gs5g\") on node \"crc\" DevicePath \"\""
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.329972 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fqtb5"]
Oct 06 07:28:07 crc kubenswrapper[4769]: W1006 07:28:07.344393 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod118688b6_b9e2_4de4_8c39_6bd6c68d158e.slice/crio-07ca9801904add9bddf9748bccb58e221c9386a856f38c5aecb0d195749b4141 WatchSource:0}: Error finding container 07ca9801904add9bddf9748bccb58e221c9386a856f38c5aecb0d195749b4141: Status 404 returned error can't find the container with id 07ca9801904add9bddf9748bccb58e221c9386a856f38c5aecb0d195749b4141
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.474768 4769 generic.go:334] "Generic (PLEG): container finished" podID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" containerID="8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d" exitCode=0
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.474840 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vkzrg" event={"ID":"ba55e1cb-d03a-40b8-8f68-85e0735a9177","Type":"ContainerDied","Data":"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"}
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.474895 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vkzrg"
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.475257 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vkzrg" event={"ID":"ba55e1cb-d03a-40b8-8f68-85e0735a9177","Type":"ContainerDied","Data":"abaa75ba20ed1a866eaf1eaee32c4b4855d605137b9c75b73dbe756aa5c4971d"}
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.475299 4769 scope.go:117] "RemoveContainer" containerID="8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.477196 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fqtb5" event={"ID":"118688b6-b9e2-4de4-8c39-6bd6c68d158e","Type":"ContainerStarted","Data":"07ca9801904add9bddf9748bccb58e221c9386a856f38c5aecb0d195749b4141"}
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.493367 4769 scope.go:117] "RemoveContainer" containerID="8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"
Oct 06 07:28:07 crc kubenswrapper[4769]: E1006 07:28:07.496780 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d\": container with ID starting with 8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d not found: ID does not exist" containerID="8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.496869 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d"} err="failed to get container status \"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d\": rpc error: code = NotFound desc = could not find container \"8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d\": container with ID starting with 8973381fa09a017930befcfbdef593f65bd559a734d16f2c7f035f5c1ad4939d not found: ID does not exist"
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.531998 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"]
Oct 06 07:28:07 crc kubenswrapper[4769]: I1006 07:28:07.536165 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vkzrg"]
Oct 06 07:28:08 crc kubenswrapper[4769]: I1006 07:28:08.174926 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" path="/var/lib/kubelet/pods/ba55e1cb-d03a-40b8-8f68-85e0735a9177/volumes"
Oct 06 07:28:09 crc kubenswrapper[4769]: I1006 07:28:09.492142 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fqtb5" event={"ID":"118688b6-b9e2-4de4-8c39-6bd6c68d158e","Type":"ContainerStarted","Data":"dab6c7feb2a57435645cd015c0e32564049147fd774bcc8302273f5ec21c1287"}
Oct 06 07:28:09 crc kubenswrapper[4769]: I1006 07:28:09.513192 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fqtb5" podStartSLOduration=2.7006769999999998 podStartE2EDuration="3.513171646s" podCreationTimestamp="2025-10-06 07:28:06 +0000 UTC" firstStartedPulling="2025-10-06 07:28:07.348549695 +0000 UTC m=+683.872830842" lastFinishedPulling="2025-10-06 07:28:08.161044341 +0000 UTC m=+684.685325488" observedRunningTime="2025-10-06 07:28:09.510121843 +0000 UTC m=+686.034403010" watchObservedRunningTime="2025-10-06 07:28:09.513171646 +0000 UTC m=+686.037452803"
Oct 06 07:28:16 crc kubenswrapper[4769]: I1006 07:28:16.901902 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:16 crc kubenswrapper[4769]: I1006 07:28:16.902624 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:16 crc kubenswrapper[4769]: I1006 07:28:16.944034 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:17 crc kubenswrapper[4769]: I1006 07:28:17.577955 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-fqtb5"
Oct 06 07:28:22 crc kubenswrapper[4769]: I1006 07:28:22.245676 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 07:28:22 crc kubenswrapper[4769]: I1006 07:28:22.245952 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.545108 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"]
Oct 06 07:28:23 crc kubenswrapper[4769]: E1006 07:28:23.545611 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" containerName="registry-server"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.545624 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" containerName="registry-server"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.545731 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba55e1cb-d03a-40b8-8f68-85e0735a9177" containerName="registry-server"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.546477 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.548044 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sbxq9"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.553246 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"]
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.706966 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlc56\" (UniqueName: \"kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.707094 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.707125 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.808382 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.808496 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.808588 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlc56\" (UniqueName: \"kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.809068 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.809101 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.834897 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlc56\" (UniqueName: \"kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56\") pod \"0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") " pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:23 crc kubenswrapper[4769]: I1006 07:28:23.863885 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:24 crc kubenswrapper[4769]: I1006 07:28:24.246969 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"]
Oct 06 07:28:24 crc kubenswrapper[4769]: I1006 07:28:24.583311 4769 generic.go:334] "Generic (PLEG): container finished" podID="e3246024-d919-4ff6-abf7-2779f6622954" containerID="828a1212907439e0247c910b390c6341f7241884178d56a59d8aa005c2b90e99" exitCode=0
Oct 06 07:28:24 crc kubenswrapper[4769]: I1006 07:28:24.583352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerDied","Data":"828a1212907439e0247c910b390c6341f7241884178d56a59d8aa005c2b90e99"}
Oct 06 07:28:24 crc kubenswrapper[4769]: I1006 07:28:24.583374 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerStarted","Data":"e64b979bf71101a60abd4260177bd17d3a74b6c396314ab2106f3465d51ba721"}
Oct 06 07:28:25 crc kubenswrapper[4769]: I1006 07:28:25.591849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerStarted","Data":"b0ac3446765843f06fc513158c9aaee97b774c42a9dc02dea30529a6760969f3"}
Oct 06 07:28:26 crc kubenswrapper[4769]: I1006 07:28:26.601336 4769 generic.go:334] "Generic (PLEG): container finished" podID="e3246024-d919-4ff6-abf7-2779f6622954" containerID="b0ac3446765843f06fc513158c9aaee97b774c42a9dc02dea30529a6760969f3" exitCode=0
Oct 06 07:28:26 crc kubenswrapper[4769]: I1006 07:28:26.601462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerDied","Data":"b0ac3446765843f06fc513158c9aaee97b774c42a9dc02dea30529a6760969f3"}
Oct 06 07:28:27 crc kubenswrapper[4769]: I1006 07:28:27.609967 4769 generic.go:334] "Generic (PLEG): container finished" podID="e3246024-d919-4ff6-abf7-2779f6622954" containerID="42307b287285fd46dcd2e1098ade96f777935a5470bbb66b032ed1d5b72e716a" exitCode=0
Oct 06 07:28:27 crc kubenswrapper[4769]: I1006 07:28:27.610040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerDied","Data":"42307b287285fd46dcd2e1098ade96f777935a5470bbb66b032ed1d5b72e716a"}
Oct 06 07:28:28 crc kubenswrapper[4769]: I1006 07:28:28.998267 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.074477 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlc56\" (UniqueName: \"kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56\") pod \"e3246024-d919-4ff6-abf7-2779f6622954\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") "
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.074601 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle\") pod \"e3246024-d919-4ff6-abf7-2779f6622954\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") "
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.074637 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util\") pod \"e3246024-d919-4ff6-abf7-2779f6622954\" (UID: \"e3246024-d919-4ff6-abf7-2779f6622954\") "
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.075678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle" (OuterVolumeSpecName: "bundle") pod "e3246024-d919-4ff6-abf7-2779f6622954" (UID: "e3246024-d919-4ff6-abf7-2779f6622954"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.082391 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56" (OuterVolumeSpecName: "kube-api-access-tlc56") pod "e3246024-d919-4ff6-abf7-2779f6622954" (UID: "e3246024-d919-4ff6-abf7-2779f6622954"). InnerVolumeSpecName "kube-api-access-tlc56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.095790 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util" (OuterVolumeSpecName: "util") pod "e3246024-d919-4ff6-abf7-2779f6622954" (UID: "e3246024-d919-4ff6-abf7-2779f6622954"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.176194 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlc56\" (UniqueName: \"kubernetes.io/projected/e3246024-d919-4ff6-abf7-2779f6622954-kube-api-access-tlc56\") on node \"crc\" DevicePath \"\""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.176390 4769 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.176489 4769 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e3246024-d919-4ff6-abf7-2779f6622954-util\") on node \"crc\" DevicePath \"\""
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.632827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7" event={"ID":"e3246024-d919-4ff6-abf7-2779f6622954","Type":"ContainerDied","Data":"e64b979bf71101a60abd4260177bd17d3a74b6c396314ab2106f3465d51ba721"}
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.632876 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e64b979bf71101a60abd4260177bd17d3a74b6c396314ab2106f3465d51ba721"
Oct 06 07:28:29 crc kubenswrapper[4769]: I1006 07:28:29.633035 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.242031 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"]
Oct 06 07:28:36 crc kubenswrapper[4769]: E1006 07:28:36.242813 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="util"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.242826 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="util"
Oct 06 07:28:36 crc kubenswrapper[4769]: E1006 07:28:36.242840 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="pull"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.242846 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="pull"
Oct 06 07:28:36 crc kubenswrapper[4769]: E1006 07:28:36.242868 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="extract"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.242874 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="extract"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.242976 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3246024-d919-4ff6-abf7-2779f6622954" containerName="extract"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.243567 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.248111 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-gczvj"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.357318 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"]
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.368943 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66fs6\" (UniqueName: \"kubernetes.io/projected/fc67c917-8b2a-47b6-9232-f80ccd98c13d-kube-api-access-66fs6\") pod \"openstack-operator-controller-operator-677d5bb784-lbb9s\" (UID: \"fc67c917-8b2a-47b6-9232-f80ccd98c13d\") " pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.470090 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66fs6\" (UniqueName: \"kubernetes.io/projected/fc67c917-8b2a-47b6-9232-f80ccd98c13d-kube-api-access-66fs6\") pod \"openstack-operator-controller-operator-677d5bb784-lbb9s\" (UID: \"fc67c917-8b2a-47b6-9232-f80ccd98c13d\") " pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.488049 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66fs6\" (UniqueName: \"kubernetes.io/projected/fc67c917-8b2a-47b6-9232-f80ccd98c13d-kube-api-access-66fs6\") pod \"openstack-operator-controller-operator-677d5bb784-lbb9s\" (UID: \"fc67c917-8b2a-47b6-9232-f80ccd98c13d\") " pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.559181 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:36 crc kubenswrapper[4769]: I1006 07:28:36.981871 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"]
Oct 06 07:28:36 crc kubenswrapper[4769]: W1006 07:28:36.986944 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc67c917_8b2a_47b6_9232_f80ccd98c13d.slice/crio-16e5f50a28eda7d946647d4482e3e5d9c55b8decfb0c2f569c0ad2be59a695a4 WatchSource:0}: Error finding container 16e5f50a28eda7d946647d4482e3e5d9c55b8decfb0c2f569c0ad2be59a695a4: Status 404 returned error can't find the container with id 16e5f50a28eda7d946647d4482e3e5d9c55b8decfb0c2f569c0ad2be59a695a4
Oct 06 07:28:37 crc kubenswrapper[4769]: I1006 07:28:37.685685 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s" event={"ID":"fc67c917-8b2a-47b6-9232-f80ccd98c13d","Type":"ContainerStarted","Data":"16e5f50a28eda7d946647d4482e3e5d9c55b8decfb0c2f569c0ad2be59a695a4"}
Oct 06 07:28:41 crc kubenswrapper[4769]: I1006 07:28:41.711798 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s" event={"ID":"fc67c917-8b2a-47b6-9232-f80ccd98c13d","Type":"ContainerStarted","Data":"6769d80dcf013882862254703b6475223ec3590deed9f61d7eadad7de2494748"}
Oct 06 07:28:43 crc kubenswrapper[4769]: I1006 07:28:43.722442 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s" event={"ID":"fc67c917-8b2a-47b6-9232-f80ccd98c13d","Type":"ContainerStarted","Data":"f8634b5d3f5fd8404083aa3b00218f5d1717697f9243bfb288f66252f5025e7b"}
Oct 06 07:28:43 crc kubenswrapper[4769]: I1006 07:28:43.722585 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:43 crc kubenswrapper[4769]: I1006 07:28:43.754766 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s" podStartSLOduration=1.463052909 podStartE2EDuration="7.754747079s" podCreationTimestamp="2025-10-06 07:28:36 +0000 UTC" firstStartedPulling="2025-10-06 07:28:36.989004637 +0000 UTC m=+713.513285784" lastFinishedPulling="2025-10-06 07:28:43.280698807 +0000 UTC m=+719.804979954" observedRunningTime="2025-10-06 07:28:43.749044323 +0000 UTC m=+720.273325480" watchObservedRunningTime="2025-10-06 07:28:43.754747079 +0000 UTC m=+720.279028226"
Oct 06 07:28:46 crc kubenswrapper[4769]: I1006 07:28:46.562776 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-677d5bb784-lbb9s"
Oct 06 07:28:52 crc kubenswrapper[4769]: I1006 07:28:52.245575 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 07:28:52 crc kubenswrapper[4769]: I1006 07:28:52.245895 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 07:29:12 crc kubenswrapper[4769]: I1006 07:29:12.846955 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"]
Oct 06 07:29:12 crc kubenswrapper[4769]: I1006 07:29:12.847534 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerName="controller-manager" containerID="cri-o://fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b" gracePeriod=30
Oct 06 07:29:12 crc kubenswrapper[4769]: E1006 07:29:12.906199 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda77a2ffb_9393_4cd9_9162_50678c269f57.slice/crio-conmon-fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b.scope\": RecentStats: unable to find data in memory cache]"
Oct 06 07:29:12 crc kubenswrapper[4769]: I1006 07:29:12.970493 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"]
Oct 06 07:29:12 crc kubenswrapper[4769]: I1006 07:29:12.970717 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerName="route-controller-manager" containerID="cri-o://ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5" gracePeriod=30
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.407531 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478"
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.411811 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.509606 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config\") pod \"a77a2ffb-9393-4cd9-9162-50678c269f57\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.509691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-657fz\" (UniqueName: \"kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz\") pod \"a77a2ffb-9393-4cd9-9162-50678c269f57\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.509727 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert\") pod \"a77a2ffb-9393-4cd9-9162-50678c269f57\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.509767 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles\") pod \"a77a2ffb-9393-4cd9-9162-50678c269f57\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.509820 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca\") pod \"a77a2ffb-9393-4cd9-9162-50678c269f57\" (UID: \"a77a2ffb-9393-4cd9-9162-50678c269f57\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.510382 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca" (OuterVolumeSpecName: "client-ca") pod "a77a2ffb-9393-4cd9-9162-50678c269f57" (UID: "a77a2ffb-9393-4cd9-9162-50678c269f57"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.510441 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config" (OuterVolumeSpecName: "config") pod "a77a2ffb-9393-4cd9-9162-50678c269f57" (UID: "a77a2ffb-9393-4cd9-9162-50678c269f57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.510850 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a77a2ffb-9393-4cd9-9162-50678c269f57" (UID: "a77a2ffb-9393-4cd9-9162-50678c269f57"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.515316 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a77a2ffb-9393-4cd9-9162-50678c269f57" (UID: "a77a2ffb-9393-4cd9-9162-50678c269f57"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.515627 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz" (OuterVolumeSpecName: "kube-api-access-657fz") pod "a77a2ffb-9393-4cd9-9162-50678c269f57" (UID: "a77a2ffb-9393-4cd9-9162-50678c269f57"). InnerVolumeSpecName "kube-api-access-657fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611315 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config\") pod \"0aaffef3-8bf8-4efc-8800-2db670031b2e\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611378 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvldr\" (UniqueName: \"kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr\") pod \"0aaffef3-8bf8-4efc-8800-2db670031b2e\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611490 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert\") pod \"0aaffef3-8bf8-4efc-8800-2db670031b2e\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611537 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca\") pod \"0aaffef3-8bf8-4efc-8800-2db670031b2e\" (UID: \"0aaffef3-8bf8-4efc-8800-2db670031b2e\") "
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611754 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-657fz\" (UniqueName: \"kubernetes.io/projected/a77a2ffb-9393-4cd9-9162-50678c269f57-kube-api-access-657fz\") on node \"crc\" DevicePath \"\""
Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611765 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a77a2ffb-9393-4cd9-9162-50678c269f57-serving-cert\") on node \"crc\" DevicePath \"\""
Oct 
06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611775 4769 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611783 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.611791 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a77a2ffb-9393-4cd9-9162-50678c269f57-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.612166 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca" (OuterVolumeSpecName: "client-ca") pod "0aaffef3-8bf8-4efc-8800-2db670031b2e" (UID: "0aaffef3-8bf8-4efc-8800-2db670031b2e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.612202 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config" (OuterVolumeSpecName: "config") pod "0aaffef3-8bf8-4efc-8800-2db670031b2e" (UID: "0aaffef3-8bf8-4efc-8800-2db670031b2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.614444 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0aaffef3-8bf8-4efc-8800-2db670031b2e" (UID: "0aaffef3-8bf8-4efc-8800-2db670031b2e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.614480 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr" (OuterVolumeSpecName: "kube-api-access-zvldr") pod "0aaffef3-8bf8-4efc-8800-2db670031b2e" (UID: "0aaffef3-8bf8-4efc-8800-2db670031b2e"). InnerVolumeSpecName "kube-api-access-zvldr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.712478 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.712512 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aaffef3-8bf8-4efc-8800-2db670031b2e-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.712524 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvldr\" (UniqueName: \"kubernetes.io/projected/0aaffef3-8bf8-4efc-8800-2db670031b2e-kube-api-access-zvldr\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.712536 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0aaffef3-8bf8-4efc-8800-2db670031b2e-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.930681 4769 generic.go:334] "Generic (PLEG): container finished" podID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerID="fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b" exitCode=0 Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.930748 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.930754 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" event={"ID":"a77a2ffb-9393-4cd9-9162-50678c269f57","Type":"ContainerDied","Data":"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b"} Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.930870 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bc478" event={"ID":"a77a2ffb-9393-4cd9-9162-50678c269f57","Type":"ContainerDied","Data":"1aee6f10112c1646c29ee6e72fd2b910ea3fd5efa487b15e21ae1eb4cbf94ecd"} Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.930906 4769 scope.go:117] "RemoveContainer" containerID="fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.931899 4769 generic.go:334] "Generic (PLEG): container finished" podID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerID="ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5" exitCode=0 Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.931924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" event={"ID":"0aaffef3-8bf8-4efc-8800-2db670031b2e","Type":"ContainerDied","Data":"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5"} Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.931949 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" event={"ID":"0aaffef3-8bf8-4efc-8800-2db670031b2e","Type":"ContainerDied","Data":"82b1938541d36057784bcc180875d46f2b9ae2675cbf76090626c5c27f894014"} Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.931986 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.949179 4769 scope.go:117] "RemoveContainer" containerID="fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b" Oct 06 07:29:13 crc kubenswrapper[4769]: E1006 07:29:13.949575 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b\": container with ID starting with fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b not found: ID does not exist" containerID="fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.949610 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b"} err="failed to get container status \"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b\": rpc error: code = NotFound desc = could not find container \"fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b\": container with ID starting with fb065e0885e097ab167b03539637984455386f131b1afbdc050db6bcbb262c8b not found: ID does not exist" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.949631 4769 scope.go:117] "RemoveContainer" containerID="ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.958494 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"] Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.963554 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bc478"] Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.972066 4769 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"] Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.972874 4769 scope.go:117] "RemoveContainer" containerID="ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5" Oct 06 07:29:13 crc kubenswrapper[4769]: E1006 07:29:13.973286 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5\": container with ID starting with ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5 not found: ID does not exist" containerID="ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.973318 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5"} err="failed to get container status \"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5\": rpc error: code = NotFound desc = could not find container \"ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5\": container with ID starting with ef44a54d9c02a2f9b34dcc51fc1bd412041c192b0c5b8dbbef9533ae3cab6fd5 not found: ID does not exist" Oct 06 07:29:13 crc kubenswrapper[4769]: I1006 07:29:13.984192 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-rsc4m"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.173060 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" path="/var/lib/kubelet/pods/0aaffef3-8bf8-4efc-8800-2db670031b2e/volumes" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.173572 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" 
path="/var/lib/kubelet/pods/a77a2ffb-9393-4cd9-9162-50678c269f57/volumes" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.200360 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:14 crc kubenswrapper[4769]: E1006 07:29:14.200646 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerName="controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.200665 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerName="controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: E1006 07:29:14.200692 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerName="route-controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.200701 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerName="route-controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.200834 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77a2ffb-9393-4cd9-9162-50678c269f57" containerName="controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.200854 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aaffef3-8bf8-4efc-8800-2db670031b2e" containerName="route-controller-manager" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.201305 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.203712 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.204868 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75c6bfc578-xvfsk"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.205603 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.205691 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.205703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.206563 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.206652 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.207578 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.207960 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.208319 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" 
Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.209006 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.209688 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.213173 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.213185 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216686 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9wz\" (UniqueName: \"kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-proxy-ca-bundles\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216759 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7668bcaa-6b0a-4320-aef2-2d419f922b30-serving-cert\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: 
\"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216784 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216832 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-config\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216858 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-client-ca\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.216996 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.217053 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dm5j\" (UniqueName: \"kubernetes.io/projected/7668bcaa-6b0a-4320-aef2-2d419f922b30-kube-api-access-2dm5j\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.220638 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.222621 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.225656 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c6bfc578-xvfsk"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318172 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dm5j\" (UniqueName: \"kubernetes.io/projected/7668bcaa-6b0a-4320-aef2-2d419f922b30-kube-api-access-2dm5j\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318257 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q9wz\" 
(UniqueName: \"kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318300 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-proxy-ca-bundles\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318329 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7668bcaa-6b0a-4320-aef2-2d419f922b30-serving-cert\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318351 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-config\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc 
kubenswrapper[4769]: I1006 07:29:14.318447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318474 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-client-ca\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.318494 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.319613 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-proxy-ca-bundles\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.319625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-client-ca\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " 
pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.319966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.320037 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7668bcaa-6b0a-4320-aef2-2d419f922b30-config\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.320224 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.322023 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.322041 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7668bcaa-6b0a-4320-aef2-2d419f922b30-serving-cert\") pod 
\"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.333743 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dm5j\" (UniqueName: \"kubernetes.io/projected/7668bcaa-6b0a-4320-aef2-2d419f922b30-kube-api-access-2dm5j\") pod \"controller-manager-75c6bfc578-xvfsk\" (UID: \"7668bcaa-6b0a-4320-aef2-2d419f922b30\") " pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.335470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q9wz\" (UniqueName: \"kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz\") pod \"route-controller-manager-5bb97dbf74-6v2sg\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.526324 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.540938 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.727714 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.776044 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c6bfc578-xvfsk"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.804923 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.939576 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" event={"ID":"7668bcaa-6b0a-4320-aef2-2d419f922b30","Type":"ContainerStarted","Data":"1f996ee6947232cf02a4eb84274d19ded5d4f3c93aba2d5f81fee8eb7f48d101"} Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.939779 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" event={"ID":"7668bcaa-6b0a-4320-aef2-2d419f922b30","Type":"ContainerStarted","Data":"dd5a6c6f124e8a00141519773ab330c357addb855d1025b9ab1b51248988ba67"} Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.940168 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.941195 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" event={"ID":"7920f182-4178-4914-a5e3-872886bddd8a","Type":"ContainerStarted","Data":"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e"} Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.941230 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" event={"ID":"7920f182-4178-4914-a5e3-872886bddd8a","Type":"ContainerStarted","Data":"dfe4502ee341a4badda8e4fca384c2e1b152872dc202a8d4593d40a5796d95df"} Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.941287 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" podUID="7920f182-4178-4914-a5e3-872886bddd8a" containerName="route-controller-manager" containerID="cri-o://5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e" gracePeriod=30 Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.941345 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.942375 4769 patch_prober.go:28] interesting pod/controller-manager-75c6bfc578-xvfsk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.942408 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" podUID="7668bcaa-6b0a-4320-aef2-2d419f922b30" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.942626 4769 patch_prober.go:28] interesting pod/route-controller-manager-5bb97dbf74-6v2sg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 
10.217.0.67:8443: connect: connection refused" start-of-body= Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.942659 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" podUID="7920f182-4178-4914-a5e3-872886bddd8a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.961550 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" podStartSLOduration=1.961533061 podStartE2EDuration="1.961533061s" podCreationTimestamp="2025-10-06 07:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:29:14.960461532 +0000 UTC m=+751.484742679" watchObservedRunningTime="2025-10-06 07:29:14.961533061 +0000 UTC m=+751.485814208" Oct 06 07:29:14 crc kubenswrapper[4769]: I1006 07:29:14.979664 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" podStartSLOduration=2.979646217 podStartE2EDuration="2.979646217s" podCreationTimestamp="2025-10-06 07:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:29:14.977053566 +0000 UTC m=+751.501334713" watchObservedRunningTime="2025-10-06 07:29:14.979646217 +0000 UTC m=+751.503927364" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.314691 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5bb97dbf74-6v2sg_7920f182-4178-4914-a5e3-872886bddd8a/route-controller-manager/0.log" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 
07:29:15.314750 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.440836 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q9wz\" (UniqueName: \"kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz\") pod \"7920f182-4178-4914-a5e3-872886bddd8a\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.440878 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config\") pod \"7920f182-4178-4914-a5e3-872886bddd8a\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.440923 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca\") pod \"7920f182-4178-4914-a5e3-872886bddd8a\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.440946 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert\") pod \"7920f182-4178-4914-a5e3-872886bddd8a\" (UID: \"7920f182-4178-4914-a5e3-872886bddd8a\") " Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.441701 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca" (OuterVolumeSpecName: "client-ca") pod "7920f182-4178-4914-a5e3-872886bddd8a" (UID: "7920f182-4178-4914-a5e3-872886bddd8a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.441809 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config" (OuterVolumeSpecName: "config") pod "7920f182-4178-4914-a5e3-872886bddd8a" (UID: "7920f182-4178-4914-a5e3-872886bddd8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.449518 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7920f182-4178-4914-a5e3-872886bddd8a" (UID: "7920f182-4178-4914-a5e3-872886bddd8a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.455152 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz" (OuterVolumeSpecName: "kube-api-access-4q9wz") pod "7920f182-4178-4914-a5e3-872886bddd8a" (UID: "7920f182-4178-4914-a5e3-872886bddd8a"). InnerVolumeSpecName "kube-api-access-4q9wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.542354 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q9wz\" (UniqueName: \"kubernetes.io/projected/7920f182-4178-4914-a5e3-872886bddd8a-kube-api-access-4q9wz\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.542389 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.542398 4769 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7920f182-4178-4914-a5e3-872886bddd8a-client-ca\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.542407 4769 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7920f182-4178-4914-a5e3-872886bddd8a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957806 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5bb97dbf74-6v2sg_7920f182-4178-4914-a5e3-872886bddd8a/route-controller-manager/0.log" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957864 4769 generic.go:334] "Generic (PLEG): container finished" podID="7920f182-4178-4914-a5e3-872886bddd8a" containerID="5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e" exitCode=2 Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957913 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" event={"ID":"7920f182-4178-4914-a5e3-872886bddd8a","Type":"ContainerDied","Data":"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e"} Oct 06 
07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957973 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" event={"ID":"7920f182-4178-4914-a5e3-872886bddd8a","Type":"ContainerDied","Data":"dfe4502ee341a4badda8e4fca384c2e1b152872dc202a8d4593d40a5796d95df"} Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957991 4769 scope.go:117] "RemoveContainer" containerID="5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.957941 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.967579 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75c6bfc578-xvfsk" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.973854 4769 scope.go:117] "RemoveContainer" containerID="5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e" Oct 06 07:29:15 crc kubenswrapper[4769]: E1006 07:29:15.974979 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e\": container with ID starting with 5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e not found: ID does not exist" containerID="5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e" Oct 06 07:29:15 crc kubenswrapper[4769]: I1006 07:29:15.975025 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e"} err="failed to get container status \"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e\": rpc error: code = NotFound desc = could not find container 
\"5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e\": container with ID starting with 5828e6f159528237e412fb01c151e7029219cf77969f5196cca083c2fac1f26e not found: ID does not exist" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.000880 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.054644 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5bb97dbf74-6v2sg"] Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.174321 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7920f182-4178-4914-a5e3-872886bddd8a" path="/var/lib/kubelet/pods/7920f182-4178-4914-a5e3-872886bddd8a/volumes" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.197146 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp"] Oct 06 07:29:16 crc kubenswrapper[4769]: E1006 07:29:16.197410 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7920f182-4178-4914-a5e3-872886bddd8a" containerName="route-controller-manager" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.197447 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7920f182-4178-4914-a5e3-872886bddd8a" containerName="route-controller-manager" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.197575 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7920f182-4178-4914-a5e3-872886bddd8a" containerName="route-controller-manager" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.197979 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.199398 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.199956 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.203314 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.203358 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.203996 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.205094 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.208116 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp"] Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.254979 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28jts\" (UniqueName: \"kubernetes.io/projected/0a5dbf67-ddbc-487f-b996-9d316a1476d5-kube-api-access-28jts\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.255041 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-client-ca\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.255266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5dbf67-ddbc-487f-b996-9d316a1476d5-serving-cert\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.255304 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-config\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.356668 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-config\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.356758 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28jts\" (UniqueName: \"kubernetes.io/projected/0a5dbf67-ddbc-487f-b996-9d316a1476d5-kube-api-access-28jts\") pod 
\"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.356799 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-client-ca\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.356862 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5dbf67-ddbc-487f-b996-9d316a1476d5-serving-cert\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.359860 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-config\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.360452 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a5dbf67-ddbc-487f-b996-9d316a1476d5-client-ca\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.361687 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5dbf67-ddbc-487f-b996-9d316a1476d5-serving-cert\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.378253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28jts\" (UniqueName: \"kubernetes.io/projected/0a5dbf67-ddbc-487f-b996-9d316a1476d5-kube-api-access-28jts\") pod \"route-controller-manager-6c5775496d-p6wvp\" (UID: \"0a5dbf67-ddbc-487f-b996-9d316a1476d5\") " pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.514494 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.902548 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp"] Oct 06 07:29:16 crc kubenswrapper[4769]: W1006 07:29:16.911090 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5dbf67_ddbc_487f_b996_9d316a1476d5.slice/crio-1c0739e3bb285afd5d5ced1fe03efa70c18f6e77ef4503a8349c2c17a0304389 WatchSource:0}: Error finding container 1c0739e3bb285afd5d5ced1fe03efa70c18f6e77ef4503a8349c2c17a0304389: Status 404 returned error can't find the container with id 1c0739e3bb285afd5d5ced1fe03efa70c18f6e77ef4503a8349c2c17a0304389 Oct 06 07:29:16 crc kubenswrapper[4769]: I1006 07:29:16.965206 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" 
event={"ID":"0a5dbf67-ddbc-487f-b996-9d316a1476d5","Type":"ContainerStarted","Data":"1c0739e3bb285afd5d5ced1fe03efa70c18f6e77ef4503a8349c2c17a0304389"} Oct 06 07:29:17 crc kubenswrapper[4769]: I1006 07:29:17.975169 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" event={"ID":"0a5dbf67-ddbc-487f-b996-9d316a1476d5","Type":"ContainerStarted","Data":"dce89a07ff6412c6c2906c2f665031913a51f4f7d67291372578fa1fc91bfa0b"} Oct 06 07:29:17 crc kubenswrapper[4769]: I1006 07:29:17.975599 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:17 crc kubenswrapper[4769]: I1006 07:29:17.983016 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" Oct 06 07:29:17 crc kubenswrapper[4769]: I1006 07:29:17.998881 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c5775496d-p6wvp" podStartSLOduration=3.998858778 podStartE2EDuration="3.998858778s" podCreationTimestamp="2025-10-06 07:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:29:17.996459902 +0000 UTC m=+754.520741069" watchObservedRunningTime="2025-10-06 07:29:17.998858778 +0000 UTC m=+754.523139955" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.084053 4769 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.344522 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.345516 
4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.347199 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-z59vx" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.352410 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.353306 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.355641 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4hmk7" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.361604 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.371660 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.396967 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.398290 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.402473 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jppb4" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.406879 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.410573 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.413288 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-p2knv" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.426874 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.429243 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.433754 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-x99gk" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.456771 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.457677 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.464859 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zdwxq" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.475700 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.477265 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.480401 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.481759 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jvbz5" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.501479 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.503130 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522627 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pntng\" (UniqueName: \"kubernetes.io/projected/362f2d56-78f2-4179-842a-cc3b7e77b8bf-kube-api-access-pntng\") pod \"glance-operator-controller-manager-698456cdc6-h2gcm\" (UID: \"362f2d56-78f2-4179-842a-cc3b7e77b8bf\") " pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" Oct 06 
07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522671 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsl92\" (UniqueName: \"kubernetes.io/projected/82e41047-a211-4e78-89bd-3c3b6fbf17c6-kube-api-access-zsl92\") pod \"barbican-operator-controller-manager-5b974f6766-q767n\" (UID: \"82e41047-a211-4e78-89bd-3c3b6fbf17c6\") " pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522696 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8qrm\" (UniqueName: \"kubernetes.io/projected/2861ed50-c4c4-4ce6-84bf-975f1ca89fd7-kube-api-access-h8qrm\") pod \"cinder-operator-controller-manager-84bd8f6848-9k7gr\" (UID: \"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7\") " pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522720 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzr9g\" (UniqueName: \"kubernetes.io/projected/702ea655-638e-4db1-bf92-4531efcb7728-kube-api-access-kzr9g\") pod \"designate-operator-controller-manager-58d86cd59d-phrqc\" (UID: \"702ea655-638e-4db1-bf92-4531efcb7728\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522739 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbphd\" (UniqueName: \"kubernetes.io/projected/5c160dde-ed5f-4e2e-89f8-c793ee675f54-kube-api-access-sbphd\") pod \"horizon-operator-controller-manager-6675647785-zwnqv\" (UID: \"5c160dde-ed5f-4e2e-89f8-c793ee675f54\") " pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522828 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hvr\" (UniqueName: \"kubernetes.io/projected/b13c6907-c560-4be1-a0b6-5bb302f02f3e-kube-api-access-k7hvr\") pod \"heat-operator-controller-manager-5c497dbdb-tg9wh\" (UID: \"b13c6907-c560-4be1-a0b6-5bb302f02f3e\") " pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522877 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6zz\" (UniqueName: \"kubernetes.io/projected/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-kube-api-access-tk6zz\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.522903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.524162 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"] Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.525031 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.531136 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.532092 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.534890 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gc552"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.537740 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-w2x7r"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.539093 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.540080 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.543633 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-nprss"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.559562 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.566269 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.573477 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.579001 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.616385 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625712 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pntng\" (UniqueName: \"kubernetes.io/projected/362f2d56-78f2-4179-842a-cc3b7e77b8bf-kube-api-access-pntng\") pod \"glance-operator-controller-manager-698456cdc6-h2gcm\" (UID: \"362f2d56-78f2-4179-842a-cc3b7e77b8bf\") " pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625764 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsl92\" (UniqueName: \"kubernetes.io/projected/82e41047-a211-4e78-89bd-3c3b6fbf17c6-kube-api-access-zsl92\") pod \"barbican-operator-controller-manager-5b974f6766-q767n\" (UID: \"82e41047-a211-4e78-89bd-3c3b6fbf17c6\") " pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625793 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8qrm\" (UniqueName: \"kubernetes.io/projected/2861ed50-c4c4-4ce6-84bf-975f1ca89fd7-kube-api-access-h8qrm\") pod \"cinder-operator-controller-manager-84bd8f6848-9k7gr\" (UID: \"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7\") " pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625812 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzr9g\" (UniqueName: \"kubernetes.io/projected/702ea655-638e-4db1-bf92-4531efcb7728-kube-api-access-kzr9g\") pod \"designate-operator-controller-manager-58d86cd59d-phrqc\" (UID: \"702ea655-638e-4db1-bf92-4531efcb7728\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625831 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbphd\" (UniqueName: \"kubernetes.io/projected/5c160dde-ed5f-4e2e-89f8-c793ee675f54-kube-api-access-sbphd\") pod \"horizon-operator-controller-manager-6675647785-zwnqv\" (UID: \"5c160dde-ed5f-4e2e-89f8-c793ee675f54\") " pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625854 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hvr\" (UniqueName: \"kubernetes.io/projected/b13c6907-c560-4be1-a0b6-5bb302f02f3e-kube-api-access-k7hvr\") pod \"heat-operator-controller-manager-5c497dbdb-tg9wh\" (UID: \"b13c6907-c560-4be1-a0b6-5bb302f02f3e\") " pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625895 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk6zz\" (UniqueName: \"kubernetes.io/projected/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-kube-api-access-tk6zz\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.625917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"
Oct 06 07:29:21 crc kubenswrapper[4769]: E1006 07:29:21.626046 4769 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Oct 06 07:29:21 crc kubenswrapper[4769]: E1006 07:29:21.626092 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert podName:d27cfd7b-7bb0-462e-b6c1-c7537b34511a nodeName:}" failed. No retries permitted until 2025-10-06 07:29:22.126077505 +0000 UTC m=+758.650358652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert") pod "infra-operator-controller-manager-84788b6bc5-q7mbs" (UID: "d27cfd7b-7bb0-462e-b6c1-c7537b34511a") : secret "infra-operator-webhook-server-cert" not found
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.642481 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.643519 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.644292 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.644525 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.649231 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-jwrc5"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.649495 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pm5fz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.668044 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8qrm\" (UniqueName: \"kubernetes.io/projected/2861ed50-c4c4-4ce6-84bf-975f1ca89fd7-kube-api-access-h8qrm\") pod \"cinder-operator-controller-manager-84bd8f6848-9k7gr\" (UID: \"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7\") " pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.669250 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzr9g\" (UniqueName: \"kubernetes.io/projected/702ea655-638e-4db1-bf92-4531efcb7728-kube-api-access-kzr9g\") pod \"designate-operator-controller-manager-58d86cd59d-phrqc\" (UID: \"702ea655-638e-4db1-bf92-4531efcb7728\") " pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.673827 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.674384 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.675333 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hvr\" (UniqueName: \"kubernetes.io/projected/b13c6907-c560-4be1-a0b6-5bb302f02f3e-kube-api-access-k7hvr\") pod \"heat-operator-controller-manager-5c497dbdb-tg9wh\" (UID: \"b13c6907-c560-4be1-a0b6-5bb302f02f3e\") " pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.675877 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk6zz\" (UniqueName: \"kubernetes.io/projected/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-kube-api-access-tk6zz\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.683268 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsl92\" (UniqueName: \"kubernetes.io/projected/82e41047-a211-4e78-89bd-3c3b6fbf17c6-kube-api-access-zsl92\") pod \"barbican-operator-controller-manager-5b974f6766-q767n\" (UID: \"82e41047-a211-4e78-89bd-3c3b6fbf17c6\") " pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.692515 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbphd\" (UniqueName: \"kubernetes.io/projected/5c160dde-ed5f-4e2e-89f8-c793ee675f54-kube-api-access-sbphd\") pod \"horizon-operator-controller-manager-6675647785-zwnqv\" (UID: \"5c160dde-ed5f-4e2e-89f8-c793ee675f54\") " pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.696569 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.697945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pntng\" (UniqueName: \"kubernetes.io/projected/362f2d56-78f2-4179-842a-cc3b7e77b8bf-kube-api-access-pntng\") pod \"glance-operator-controller-manager-698456cdc6-h2gcm\" (UID: \"362f2d56-78f2-4179-842a-cc3b7e77b8bf\") " pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.703486 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.714915 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.715376 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.717139 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.720128 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fkcbf"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.733993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7v5\" (UniqueName: \"kubernetes.io/projected/7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8-kube-api-access-tl7v5\") pod \"neutron-operator-controller-manager-69b956fbf6-8n98x\" (UID: \"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.734059 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2gq8\" (UniqueName: \"kubernetes.io/projected/02615b46-f027-4aef-b28e-8c4b1c7e1d21-kube-api-access-z2gq8\") pod \"manila-operator-controller-manager-7cb48dbc-bq7vc\" (UID: \"02615b46-f027-4aef-b28e-8c4b1c7e1d21\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.734098 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xkb9\" (UniqueName: \"kubernetes.io/projected/62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a-kube-api-access-8xkb9\") pod \"keystone-operator-controller-manager-57c9cdcf57-wjk8j\" (UID: \"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a\") " pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.734135 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzpz\" (UniqueName: \"kubernetes.io/projected/e140b4bd-2692-432a-8f4c-78f4462df844-kube-api-access-7rzpz\") pod \"nova-operator-controller-manager-6c9b57c67-t6647\" (UID: \"e140b4bd-2692-432a-8f4c-78f4462df844\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.734157 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwm9v\" (UniqueName: \"kubernetes.io/projected/0dcb449d-5590-431e-a8d5-7cb1f6c38a9d-kube-api-access-nwm9v\") pod \"ironic-operator-controller-manager-6f5894c49f-2djd8\" (UID: \"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d\") " pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.734173 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffdtw\" (UniqueName: \"kubernetes.io/projected/7562bfed-f13e-4197-8169-adbed0092212-kube-api-access-ffdtw\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-9s2zz\" (UID: \"7562bfed-f13e-4197-8169-adbed0092212\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.735990 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.742507 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.747896 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.754130 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.755481 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.757947 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-5ckwv"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.763608 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.769380 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.770916 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.779830 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.783082 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.786242 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2z57g"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.793496 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.794594 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.805885 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-tvtch"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.808333 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.810469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.812528 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.817236 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-shmbl"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.820505 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.821603 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.832117 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.832177 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837271 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwm9v\" (UniqueName: \"kubernetes.io/projected/0dcb449d-5590-431e-a8d5-7cb1f6c38a9d-kube-api-access-nwm9v\") pod \"ironic-operator-controller-manager-6f5894c49f-2djd8\" (UID: \"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d\") " pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837312 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffdtw\" (UniqueName: \"kubernetes.io/projected/7562bfed-f13e-4197-8169-adbed0092212-kube-api-access-ffdtw\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-9s2zz\" (UID: \"7562bfed-f13e-4197-8169-adbed0092212\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837373 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7v5\" (UniqueName: \"kubernetes.io/projected/7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8-kube-api-access-tl7v5\") pod \"neutron-operator-controller-manager-69b956fbf6-8n98x\" (UID: \"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2gq8\" (UniqueName: \"kubernetes.io/projected/02615b46-f027-4aef-b28e-8c4b1c7e1d21-kube-api-access-z2gq8\") pod \"manila-operator-controller-manager-7cb48dbc-bq7vc\" (UID: \"02615b46-f027-4aef-b28e-8c4b1c7e1d21\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837478 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xkb9\" (UniqueName: \"kubernetes.io/projected/62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a-kube-api-access-8xkb9\") pod \"keystone-operator-controller-manager-57c9cdcf57-wjk8j\" (UID: \"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a\") " pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.837528 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzpz\" (UniqueName: \"kubernetes.io/projected/e140b4bd-2692-432a-8f4c-78f4462df844-kube-api-access-7rzpz\") pod \"nova-operator-controller-manager-6c9b57c67-t6647\" (UID: \"e140b4bd-2692-432a-8f4c-78f4462df844\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.841553 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-46s27"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.860023 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.874151 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xkb9\" (UniqueName: \"kubernetes.io/projected/62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a-kube-api-access-8xkb9\") pod \"keystone-operator-controller-manager-57c9cdcf57-wjk8j\" (UID: \"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a\") " pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.875027 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7v5\" (UniqueName: \"kubernetes.io/projected/7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8-kube-api-access-tl7v5\") pod \"neutron-operator-controller-manager-69b956fbf6-8n98x\" (UID: \"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8\") " pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.876281 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.876886 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzpz\" (UniqueName: \"kubernetes.io/projected/e140b4bd-2692-432a-8f4c-78f4462df844-kube-api-access-7rzpz\") pod \"nova-operator-controller-manager-6c9b57c67-t6647\" (UID: \"e140b4bd-2692-432a-8f4c-78f4462df844\") " pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.883948 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.891076 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.900857 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffdtw\" (UniqueName: \"kubernetes.io/projected/7562bfed-f13e-4197-8169-adbed0092212-kube-api-access-ffdtw\") pod \"mariadb-operator-controller-manager-d6c9dc5bc-9s2zz\" (UID: \"7562bfed-f13e-4197-8169-adbed0092212\") " pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.912229 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-g78zc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.922076 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.923201 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.937599 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-p5fb8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.943196 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwm9v\" (UniqueName: \"kubernetes.io/projected/0dcb449d-5590-431e-a8d5-7cb1f6c38a9d-kube-api-access-nwm9v\") pod \"ironic-operator-controller-manager-6f5894c49f-2djd8\" (UID: \"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d\") " pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.944322 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2gq8\" (UniqueName: \"kubernetes.io/projected/02615b46-f027-4aef-b28e-8c4b1c7e1d21-kube-api-access-z2gq8\") pod \"manila-operator-controller-manager-7cb48dbc-bq7vc\" (UID: \"02615b46-f027-4aef-b28e-8c4b1c7e1d21\") " pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.954868 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbgc\" (UniqueName: \"kubernetes.io/projected/cbbe870b-89f2-4e48-989f-4491894e20e0-kube-api-access-ztbgc\") pod \"telemetry-operator-controller-manager-f589c7597-dll8n\" (UID: \"cbbe870b-89f2-4e48-989f-4491894e20e0\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.954934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpst\" (UniqueName: \"kubernetes.io/projected/03db1017-2d31-4f66-a1c6-a52297484016-kube-api-access-hnpst\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.954980 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.955055 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsv8r\" (UniqueName: \"kubernetes.io/projected/80adcd74-7072-48aa-9f4e-e1896c5fe81c-kube-api-access-nsv8r\") pod \"ovn-operator-controller-manager-c968bb45-zttxz\" (UID: \"80adcd74-7072-48aa-9f4e-e1896c5fe81c\") " pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.955103 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9547\" (UniqueName: \"kubernetes.io/projected/b1c1ec64-da04-49e4-8eff-a9d020390333-kube-api-access-g9547\") pod \"placement-operator-controller-manager-66f6d6849b-njlwf\" (UID: \"b1c1ec64-da04-49e4-8eff-a9d020390333\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.955127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbdr2\" (UniqueName: \"kubernetes.io/projected/a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3-kube-api-access-hbdr2\") pod \"swift-operator-controller-manager-76d5577b-pm2s4\" (UID: \"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.956338 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wm2s\" (UniqueName: \"kubernetes.io/projected/049068f2-65f7-46d1-a0b1-d31fd9efb7b3-kube-api-access-9wm2s\") pod \"octavia-operator-controller-manager-69f59f9d8-pvg66\" (UID: \"049068f2-65f7-46d1-a0b1-d31fd9efb7b3\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.956391 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvd8\" (UniqueName: \"kubernetes.io/projected/e43df54a-23a1-4f04-947d-5d76ee8334e0-kube-api-access-dnvd8\") pod \"test-operator-controller-manager-6bb6dcddc-c4ppt\" (UID: \"e43df54a-23a1-4f04-947d-5d76ee8334e0\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.964932 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.985146 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"]
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.987584 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"
Oct 06 07:29:21 crc kubenswrapper[4769]: I1006 07:29:21.991835 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.018984 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq"]
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.020163 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.035528 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-rtbl9"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.040360 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq"]
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063485 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgn4s\" (UniqueName: \"kubernetes.io/projected/d925692d-8bc4-4013-9e6e-98f43c9cf89d-kube-api-access-hgn4s\") pod \"watcher-operator-controller-manager-5d98cc5575-fjmzq\" (UID: \"d925692d-8bc4-4013-9e6e-98f43c9cf89d\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063545 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbgc\" (UniqueName: \"kubernetes.io/projected/cbbe870b-89f2-4e48-989f-4491894e20e0-kube-api-access-ztbgc\") pod \"telemetry-operator-controller-manager-f589c7597-dll8n\" (UID: \"cbbe870b-89f2-4e48-989f-4491894e20e0\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063576 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnpst\" (UniqueName: \"kubernetes.io/projected/03db1017-2d31-4f66-a1c6-a52297484016-kube-api-access-hnpst\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063616 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063652 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsv8r\" (UniqueName: \"kubernetes.io/projected/80adcd74-7072-48aa-9f4e-e1896c5fe81c-kube-api-access-nsv8r\") pod \"ovn-operator-controller-manager-c968bb45-zttxz\" (UID: \"80adcd74-7072-48aa-9f4e-e1896c5fe81c\") " pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063673 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9547\" (UniqueName: \"kubernetes.io/projected/b1c1ec64-da04-49e4-8eff-a9d020390333-kube-api-access-g9547\") pod \"placement-operator-controller-manager-66f6d6849b-njlwf\" (UID: \"b1c1ec64-da04-49e4-8eff-a9d020390333\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbdr2\" (UniqueName: \"kubernetes.io/projected/a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3-kube-api-access-hbdr2\") pod \"swift-operator-controller-manager-76d5577b-pm2s4\" (UID: \"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063734 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wm2s\" (UniqueName: \"kubernetes.io/projected/049068f2-65f7-46d1-a0b1-d31fd9efb7b3-kube-api-access-9wm2s\") pod \"octavia-operator-controller-manager-69f59f9d8-pvg66\" (UID: \"049068f2-65f7-46d1-a0b1-d31fd9efb7b3\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.063762 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnvd8\" (UniqueName: \"kubernetes.io/projected/e43df54a-23a1-4f04-947d-5d76ee8334e0-kube-api-access-dnvd8\") pod \"test-operator-controller-manager-6bb6dcddc-c4ppt\" (UID: \"e43df54a-23a1-4f04-947d-5d76ee8334e0\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"
Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.064455 4769 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.064505 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert podName:03db1017-2d31-4f66-a1c6-a52297484016 nodeName:}" failed. No retries permitted until 2025-10-06 07:29:22.564490913 +0000 UTC m=+759.088772050 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert") pod "openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" (UID: "03db1017-2d31-4f66-a1c6-a52297484016") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.089544 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.090773 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.103022 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbgc\" (UniqueName: \"kubernetes.io/projected/cbbe870b-89f2-4e48-989f-4491894e20e0-kube-api-access-ztbgc\") pod \"telemetry-operator-controller-manager-f589c7597-dll8n\" (UID: \"cbbe870b-89f2-4e48-989f-4491894e20e0\") " pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.117364 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9547\" (UniqueName: \"kubernetes.io/projected/b1c1ec64-da04-49e4-8eff-a9d020390333-kube-api-access-g9547\") pod \"placement-operator-controller-manager-66f6d6849b-njlwf\" (UID: \"b1c1ec64-da04-49e4-8eff-a9d020390333\") " pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"
Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.118178 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsv8r\" (UniqueName: \"kubernetes.io/projected/80adcd74-7072-48aa-9f4e-e1896c5fe81c-kube-api-access-nsv8r\") pod \"ovn-operator-controller-manager-c968bb45-zttxz\" (UID:
\"80adcd74-7072-48aa-9f4e-e1896c5fe81c\") " pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.129974 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wm2s\" (UniqueName: \"kubernetes.io/projected/049068f2-65f7-46d1-a0b1-d31fd9efb7b3-kube-api-access-9wm2s\") pod \"octavia-operator-controller-manager-69f59f9d8-pvg66\" (UID: \"049068f2-65f7-46d1-a0b1-d31fd9efb7b3\") " pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.153592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.154033 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnpst\" (UniqueName: \"kubernetes.io/projected/03db1017-2d31-4f66-a1c6-a52297484016-kube-api-access-hnpst\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.155802 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbdr2\" (UniqueName: \"kubernetes.io/projected/a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3-kube-api-access-hbdr2\") pod \"swift-operator-controller-manager-76d5577b-pm2s4\" (UID: \"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3\") " pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.172184 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgn4s\" (UniqueName: \"kubernetes.io/projected/d925692d-8bc4-4013-9e6e-98f43c9cf89d-kube-api-access-hgn4s\") pod 
\"watcher-operator-controller-manager-5d98cc5575-fjmzq\" (UID: \"d925692d-8bc4-4013-9e6e-98f43c9cf89d\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.172296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.177114 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.179827 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnvd8\" (UniqueName: \"kubernetes.io/projected/e43df54a-23a1-4f04-947d-5d76ee8334e0-kube-api-access-dnvd8\") pod \"test-operator-controller-manager-6bb6dcddc-c4ppt\" (UID: \"e43df54a-23a1-4f04-947d-5d76ee8334e0\") " pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.191752 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d27cfd7b-7bb0-462e-b6c1-c7537b34511a-cert\") pod \"infra-operator-controller-manager-84788b6bc5-q7mbs\" (UID: \"d27cfd7b-7bb0-462e-b6c1-c7537b34511a\") " pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.201721 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.203404 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.209684 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-w825d" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.209881 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.216627 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgn4s\" (UniqueName: \"kubernetes.io/projected/d925692d-8bc4-4013-9e6e-98f43c9cf89d-kube-api-access-hgn4s\") pod \"watcher-operator-controller-manager-5d98cc5575-fjmzq\" (UID: \"d925692d-8bc4-4013-9e6e-98f43c9cf89d\") " pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.216943 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.246330 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.246529 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.246587 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.250762 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.251573 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.253187 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-srkhw" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.256447 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.256522 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e" gracePeriod=600 Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.261568 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.266832 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.272983 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.273137 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svpkw\" (UniqueName: \"kubernetes.io/projected/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-kube-api-access-svpkw\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.273172 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9v5\" (UniqueName: \"kubernetes.io/projected/603e7c24-39ad-4bd2-84ea-0e652c9a829c-kube-api-access-7d9v5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6\" (UID: \"603e7c24-39ad-4bd2-84ea-0e652c9a829c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.307720 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.316748 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.360015 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.364789 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.376880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svpkw\" (UniqueName: \"kubernetes.io/projected/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-kube-api-access-svpkw\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.376921 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d9v5\" (UniqueName: \"kubernetes.io/projected/603e7c24-39ad-4bd2-84ea-0e652c9a829c-kube-api-access-7d9v5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6\" (UID: \"603e7c24-39ad-4bd2-84ea-0e652c9a829c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.376965 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " 
pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.377085 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.377131 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert podName:a78bef32-e8ac-4828-9eac-ba7fa7a0d609 nodeName:}" failed. No retries permitted until 2025-10-06 07:29:22.87711701 +0000 UTC m=+759.401398157 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert") pod "openstack-operator-controller-manager-7cfc658b9-ghwpn" (UID: "a78bef32-e8ac-4828-9eac-ba7fa7a0d609") : secret "webhook-server-cert" not found Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.398729 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.402141 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.407904 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.408639 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d9v5\" (UniqueName: \"kubernetes.io/projected/603e7c24-39ad-4bd2-84ea-0e652c9a829c-kube-api-access-7d9v5\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6\" (UID: \"603e7c24-39ad-4bd2-84ea-0e652c9a829c\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.420495 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svpkw\" (UniqueName: \"kubernetes.io/projected/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-kube-api-access-svpkw\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.482482 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.578856 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.585114 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/03db1017-2d31-4f66-a1c6-a52297484016-cert\") pod \"openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8\" (UID: \"03db1017-2d31-4f66-a1c6-a52297484016\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.633814 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.760263 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.887690 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.888147 4769 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 06 07:29:22 crc kubenswrapper[4769]: E1006 07:29:22.888194 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert podName:a78bef32-e8ac-4828-9eac-ba7fa7a0d609 nodeName:}" failed. No retries permitted until 2025-10-06 07:29:23.888180185 +0000 UTC m=+760.412461332 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert") pod "openstack-operator-controller-manager-7cfc658b9-ghwpn" (UID: "a78bef32-e8ac-4828-9eac-ba7fa7a0d609") : secret "webhook-server-cert" not found Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.923736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh"] Oct 06 07:29:22 crc kubenswrapper[4769]: I1006 07:29:22.994223 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.004213 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.029533 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" event={"ID":"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7","Type":"ContainerStarted","Data":"5c91ebf6bd6f412e00375372d7e16507697baec00eb34f060baa2f630b4e2cbd"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.033546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" event={"ID":"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a","Type":"ContainerStarted","Data":"b790fc8c604ca23f7a83f60816a9bb293e81994fd581af36381976c75e3b9171"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.036993 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e" exitCode=0 Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.037018 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.037067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.037081 4769 scope.go:117] "RemoveContainer" containerID="110402474b39777a830d4db80b735713b2c2b3181e687b4bbba006beefb9f9e9" Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.038234 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" event={"ID":"b13c6907-c560-4be1-a0b6-5bb302f02f3e","Type":"ContainerStarted","Data":"f1b69dca0f9ba416c33d51c1c6fd3e0f71668c7b9c1f5de40e91818d273488f9"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.039998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" event={"ID":"362f2d56-78f2-4179-842a-cc3b7e77b8bf","Type":"ContainerStarted","Data":"da0e5ed9862b1f4b12e32edbfb2d64fe89e5e0b951dde1b470b41ca7123e6f71"} Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.309046 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.330140 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.336168 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.340406 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc"] Oct 06 07:29:23 crc kubenswrapper[4769]: W1006 07:29:23.355963 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02615b46_f027_4aef_b28e_8c4b1c7e1d21.slice/crio-7a2e78cf4aec21e6935e3ca62bd58f54bf02174323843e7ddc08a39f38eed31c WatchSource:0}: Error finding container 7a2e78cf4aec21e6935e3ca62bd58f54bf02174323843e7ddc08a39f38eed31c: Status 404 returned error can't find the container with id 7a2e78cf4aec21e6935e3ca62bd58f54bf02174323843e7ddc08a39f38eed31c Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.879702 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.898543 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.903281 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.908372 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.911295 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.913673 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.918096 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.919284 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a78bef32-e8ac-4828-9eac-ba7fa7a0d609-cert\") pod \"openstack-operator-controller-manager-7cfc658b9-ghwpn\" (UID: \"a78bef32-e8ac-4828-9eac-ba7fa7a0d609\") " pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.923309 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.927079 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.940003 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs"] Oct 06 07:29:23 crc kubenswrapper[4769]: I1006 07:29:23.945639 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n"] Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.046983 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.052918 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" event={"ID":"702ea655-638e-4db1-bf92-4531efcb7728","Type":"ContainerStarted","Data":"5eb946bb0615797f4dd798db26515832d54ba43cfe02fbb226e5dbe4ef23e10a"} Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.054251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" event={"ID":"5c160dde-ed5f-4e2e-89f8-c793ee675f54","Type":"ContainerStarted","Data":"0860173d2176017b8e955d235e8ef647ccc44001b997e2ff03127bd0c69a2874"} Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.056864 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" event={"ID":"02615b46-f027-4aef-b28e-8c4b1c7e1d21","Type":"ContainerStarted","Data":"7a2e78cf4aec21e6935e3ca62bd58f54bf02174323843e7ddc08a39f38eed31c"} Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.058892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" event={"ID":"b1c1ec64-da04-49e4-8eff-a9d020390333","Type":"ContainerStarted","Data":"67d9745d64b8f5d1151eb4fc227402f8be73e31bacaeca14fc244d8958f36c12"} Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.260068 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66"] Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.274206 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6"] Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.292873 4769 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt"] Oct 06 07:29:24 crc kubenswrapper[4769]: I1006 07:29:24.301148 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8"] Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.946838 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82e41047_a211_4e78_89bd_3c3b6fbf17c6.slice/crio-b46f21839d311cff30c4cb180a3fcfa4ac5786f3332bb289b2bd79c67f94768e WatchSource:0}: Error finding container b46f21839d311cff30c4cb180a3fcfa4ac5786f3332bb289b2bd79c67f94768e: Status 404 returned error can't find the container with id b46f21839d311cff30c4cb180a3fcfa4ac5786f3332bb289b2bd79c67f94768e Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.950217 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd27cfd7b_7bb0_462e_b6c1_c7537b34511a.slice/crio-92c92408322bce98a01c5862067ca2b214ac205569c9b181f3b67cc73d8276ec WatchSource:0}: Error finding container 92c92408322bce98a01c5862067ca2b214ac205569c9b181f3b67cc73d8276ec: Status 404 returned error can't find the container with id 92c92408322bce98a01c5862067ca2b214ac205569c9b181f3b67cc73d8276ec Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.953498 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dcb449d_5590_431e_a8d5_7cb1f6c38a9d.slice/crio-fc34e4cccf69abb8a3ad47c21780e1479ffa2545f0da54d6cea6b3c681a27c73 WatchSource:0}: Error finding container fc34e4cccf69abb8a3ad47c21780e1479ffa2545f0da54d6cea6b3c681a27c73: Status 404 returned error can't find the container with id fc34e4cccf69abb8a3ad47c21780e1479ffa2545f0da54d6cea6b3c681a27c73 Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.955389 4769 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod603e7c24_39ad_4bd2_84ea_0e652c9a829c.slice/crio-84484a650c166e60b3df881905fa1686c618a76ed0d483a6dd5ac09bbdb159b4 WatchSource:0}: Error finding container 84484a650c166e60b3df881905fa1686c618a76ed0d483a6dd5ac09bbdb159b4: Status 404 returned error can't find the container with id 84484a650c166e60b3df881905fa1686c618a76ed0d483a6dd5ac09bbdb159b4 Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.957250 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0f015e6_bbe2_4fba_ac6a_3dbb6a818ad3.slice/crio-6aa9456dc974ddf99e21068c478d8e0cd816a77d78073e3611b02ce43c5e1b96 WatchSource:0}: Error finding container 6aa9456dc974ddf99e21068c478d8e0cd816a77d78073e3611b02ce43c5e1b96: Status 404 returned error can't find the container with id 6aa9456dc974ddf99e21068c478d8e0cd816a77d78073e3611b02ce43c5e1b96 Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.965816 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode140b4bd_2692_432a_8f4c_78f4462df844.slice/crio-e9ab41f9b47a611c72f2386c6f2af942f4c253743eb319c086d8d710e563fb76 WatchSource:0}: Error finding container e9ab41f9b47a611c72f2386c6f2af942f4c253743eb319c086d8d710e563fb76: Status 404 returned error can't find the container with id e9ab41f9b47a611c72f2386c6f2af942f4c253743eb319c086d8d710e563fb76 Oct 06 07:29:24 crc kubenswrapper[4769]: W1006 07:29:24.976864 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7562bfed_f13e_4197_8169_adbed0092212.slice/crio-b298d738d99e32ccd6f10b5fea6acb35189a0d2f24a797381f0d8fffe29d60d2 WatchSource:0}: Error finding container b298d738d99e32ccd6f10b5fea6acb35189a0d2f24a797381f0d8fffe29d60d2: Status 404 returned error can't 
find the container with id b298d738d99e32ccd6f10b5fea6acb35189a0d2f24a797381f0d8fffe29d60d2 Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.066015 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" event={"ID":"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3","Type":"ContainerStarted","Data":"6aa9456dc974ddf99e21068c478d8e0cd816a77d78073e3611b02ce43c5e1b96"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.067298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" event={"ID":"d27cfd7b-7bb0-462e-b6c1-c7537b34511a","Type":"ContainerStarted","Data":"92c92408322bce98a01c5862067ca2b214ac205569c9b181f3b67cc73d8276ec"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.068246 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" event={"ID":"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d","Type":"ContainerStarted","Data":"fc34e4cccf69abb8a3ad47c21780e1479ffa2545f0da54d6cea6b3c681a27c73"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.069661 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" event={"ID":"7562bfed-f13e-4197-8169-adbed0092212","Type":"ContainerStarted","Data":"b298d738d99e32ccd6f10b5fea6acb35189a0d2f24a797381f0d8fffe29d60d2"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.070929 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" event={"ID":"82e41047-a211-4e78-89bd-3c3b6fbf17c6","Type":"ContainerStarted","Data":"b46f21839d311cff30c4cb180a3fcfa4ac5786f3332bb289b2bd79c67f94768e"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.071858 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" event={"ID":"e140b4bd-2692-432a-8f4c-78f4462df844","Type":"ContainerStarted","Data":"e9ab41f9b47a611c72f2386c6f2af942f4c253743eb319c086d8d710e563fb76"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.072970 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" event={"ID":"603e7c24-39ad-4bd2-84ea-0e652c9a829c","Type":"ContainerStarted","Data":"84484a650c166e60b3df881905fa1686c618a76ed0d483a6dd5ac09bbdb159b4"} Oct 06 07:29:25 crc kubenswrapper[4769]: I1006 07:29:25.073907 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" event={"ID":"80adcd74-7072-48aa-9f4e-e1896c5fe81c","Type":"ContainerStarted","Data":"ffc6af228a4bea764fe4c5b421f1c18d78a6d4f5646ed54095dc11aaaa7a9b72"} Oct 06 07:29:25 crc kubenswrapper[4769]: W1006 07:29:25.410569 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd925692d_8bc4_4013_9e6e_98f43c9cf89d.slice/crio-cf9f7b97f0ea331b3a3fbf18c3db5ec05f84ae963e02fc81e8ed44b8d807e77a WatchSource:0}: Error finding container cf9f7b97f0ea331b3a3fbf18c3db5ec05f84ae963e02fc81e8ed44b8d807e77a: Status 404 returned error can't find the container with id cf9f7b97f0ea331b3a3fbf18c3db5ec05f84ae963e02fc81e8ed44b8d807e77a Oct 06 07:29:25 crc kubenswrapper[4769]: W1006 07:29:25.792470 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbbe870b_89f2_4e48_989f_4491894e20e0.slice/crio-f5dd895836fa0d3cdb4db04717fc33a55b92008394326983a4c8180c497d968a WatchSource:0}: Error finding container f5dd895836fa0d3cdb4db04717fc33a55b92008394326983a4c8180c497d968a: Status 404 returned error can't find the container with id f5dd895836fa0d3cdb4db04717fc33a55b92008394326983a4c8180c497d968a Oct 
06 07:29:26 crc kubenswrapper[4769]: I1006 07:29:26.081656 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" event={"ID":"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8","Type":"ContainerStarted","Data":"54b96fc11413a319994f5de0d77dda19d7340e8f9f2c516f36a82ae8079c6bc7"} Oct 06 07:29:26 crc kubenswrapper[4769]: I1006 07:29:26.082988 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" event={"ID":"cbbe870b-89f2-4e48-989f-4491894e20e0","Type":"ContainerStarted","Data":"f5dd895836fa0d3cdb4db04717fc33a55b92008394326983a4c8180c497d968a"} Oct 06 07:29:26 crc kubenswrapper[4769]: I1006 07:29:26.084039 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" event={"ID":"d925692d-8bc4-4013-9e6e-98f43c9cf89d","Type":"ContainerStarted","Data":"cf9f7b97f0ea331b3a3fbf18c3db5ec05f84ae963e02fc81e8ed44b8d807e77a"} Oct 06 07:29:26 crc kubenswrapper[4769]: W1006 07:29:26.177347 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03db1017_2d31_4f66_a1c6_a52297484016.slice/crio-063260a959c034f9fc7bc6b82b3cc69d016bcf04c47804da42b5804e26abcbfe WatchSource:0}: Error finding container 063260a959c034f9fc7bc6b82b3cc69d016bcf04c47804da42b5804e26abcbfe: Status 404 returned error can't find the container with id 063260a959c034f9fc7bc6b82b3cc69d016bcf04c47804da42b5804e26abcbfe Oct 06 07:29:26 crc kubenswrapper[4769]: W1006 07:29:26.181782 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode43df54a_23a1_4f04_947d_5d76ee8334e0.slice/crio-478dd6843e5c8b6d2f28baafcaa01eabaf247a4a2eee45bda47dd8f1c87feb8b WatchSource:0}: Error finding container 478dd6843e5c8b6d2f28baafcaa01eabaf247a4a2eee45bda47dd8f1c87feb8b: 
Status 404 returned error can't find the container with id 478dd6843e5c8b6d2f28baafcaa01eabaf247a4a2eee45bda47dd8f1c87feb8b Oct 06 07:29:26 crc kubenswrapper[4769]: W1006 07:29:26.183335 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod049068f2_65f7_46d1_a0b1_d31fd9efb7b3.slice/crio-4f2a63e5971ac7bbe6502cf0c7ba9ef8013c9c1b6b92e2d475971815d0080ade WatchSource:0}: Error finding container 4f2a63e5971ac7bbe6502cf0c7ba9ef8013c9c1b6b92e2d475971815d0080ade: Status 404 returned error can't find the container with id 4f2a63e5971ac7bbe6502cf0c7ba9ef8013c9c1b6b92e2d475971815d0080ade Oct 06 07:29:27 crc kubenswrapper[4769]: I1006 07:29:27.094840 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" event={"ID":"03db1017-2d31-4f66-a1c6-a52297484016","Type":"ContainerStarted","Data":"063260a959c034f9fc7bc6b82b3cc69d016bcf04c47804da42b5804e26abcbfe"} Oct 06 07:29:27 crc kubenswrapper[4769]: I1006 07:29:27.096723 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" event={"ID":"049068f2-65f7-46d1-a0b1-d31fd9efb7b3","Type":"ContainerStarted","Data":"4f2a63e5971ac7bbe6502cf0c7ba9ef8013c9c1b6b92e2d475971815d0080ade"} Oct 06 07:29:27 crc kubenswrapper[4769]: I1006 07:29:27.099341 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" event={"ID":"e43df54a-23a1-4f04-947d-5d76ee8334e0","Type":"ContainerStarted","Data":"478dd6843e5c8b6d2f28baafcaa01eabaf247a4a2eee45bda47dd8f1c87feb8b"} Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.740210 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.745021 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.752829 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.837059 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.837122 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsm6b\" (UniqueName: \"kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.837150 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.938756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.938808 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tsm6b\" (UniqueName: \"kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.938830 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.939366 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.939434 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:31 crc kubenswrapper[4769]: I1006 07:29:31.962519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsm6b\" (UniqueName: \"kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b\") pod \"redhat-marketplace-8j75r\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:32 crc kubenswrapper[4769]: I1006 07:29:32.079210 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:34 crc kubenswrapper[4769]: I1006 07:29:34.794733 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn"] Oct 06 07:29:34 crc kubenswrapper[4769]: I1006 07:29:34.986380 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.145530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" event={"ID":"362f2d56-78f2-4179-842a-cc3b7e77b8bf","Type":"ContainerStarted","Data":"14c5b5bf8e98874e98cd0f6f799964806e06c560ee622ae78416850470b2ee95"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.147068 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" event={"ID":"603e7c24-39ad-4bd2-84ea-0e652c9a829c","Type":"ContainerStarted","Data":"717bb150ce78138faf03615363d552cdd50dcecaea7439433fb7a8926058d3db"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.152590 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" event={"ID":"b13c6907-c560-4be1-a0b6-5bb302f02f3e","Type":"ContainerStarted","Data":"68ea630df2faa4d0bf0ea08d18f28213e7962460b5e154d4719dbc16d1c32a37"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.154183 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" event={"ID":"a78bef32-e8ac-4828-9eac-ba7fa7a0d609","Type":"ContainerStarted","Data":"970ac83af0e7b32e5653a9245746e450e3444155fa37483e1a0d9f3eff62523e"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.155518 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" event={"ID":"b1c1ec64-da04-49e4-8eff-a9d020390333","Type":"ContainerStarted","Data":"2ad5811cd9d2b2f8403f2ffa548379e5d464c64a9d2f92e01507dd7f57e9067b"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.158981 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" event={"ID":"d27cfd7b-7bb0-462e-b6c1-c7537b34511a","Type":"ContainerStarted","Data":"3d0eb9a90b3cd4f5613c78921ac48a5c5151ceb40947b53d8870108889937197"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.163933 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6" podStartSLOduration=3.725531245 podStartE2EDuration="13.163918663s" podCreationTimestamp="2025-10-06 07:29:22 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.958276366 +0000 UTC m=+761.482557513" lastFinishedPulling="2025-10-06 07:29:34.396663774 +0000 UTC m=+770.920944931" observedRunningTime="2025-10-06 07:29:35.163539703 +0000 UTC m=+771.687820850" watchObservedRunningTime="2025-10-06 07:29:35.163918663 +0000 UTC m=+771.688199810" Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.166219 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerStarted","Data":"a5fdea5019076246552d3f1154309aa12b9b387304308ca6f06c6716b40ba420"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.169249 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" event={"ID":"5c160dde-ed5f-4e2e-89f8-c793ee675f54","Type":"ContainerStarted","Data":"71e73b7da4c0079df1a4547d61793cd8ed92d0fb616b248347345096dfbbe213"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.170514 4769 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" event={"ID":"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7","Type":"ContainerStarted","Data":"0e502330098f540801e6429713c77c71087a3aebb6128070f7a70ff9dd9aafdd"} Oct 06 07:29:35 crc kubenswrapper[4769]: I1006 07:29:35.174889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" event={"ID":"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a","Type":"ContainerStarted","Data":"f0f6456676b5ce7b41ab1970079ac647cc66e3ee55edc00413ecb911f8540e64"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.201643 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" event={"ID":"d925692d-8bc4-4013-9e6e-98f43c9cf89d","Type":"ContainerStarted","Data":"5ba66b72a5f2637763f378e2093060a174e09f6da473850195158e7eabe31878"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.212606 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" event={"ID":"e43df54a-23a1-4f04-947d-5d76ee8334e0","Type":"ContainerStarted","Data":"1f587fb8ca3a303002406ffcadf9aeaba77b36523b2443f29248cdb7fc6c5e34"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.215295 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" event={"ID":"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8","Type":"ContainerStarted","Data":"64ddf42c771083b25bd6468e12bb09c1f5d16a86b211e4f8f31923f2aa1326ed"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.221111 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" event={"ID":"80adcd74-7072-48aa-9f4e-e1896c5fe81c","Type":"ContainerStarted","Data":"a3e5f29200ae68950c3dcfa2c333e82b02cc87cc6e42ef458bf5894c3d463a17"} 
Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.223344 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" event={"ID":"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3","Type":"ContainerStarted","Data":"9d8ec6797e1897687b2f36fbb0ff9eed95a161adf03d80ac6a2921c701f5e717"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.230422 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" event={"ID":"cbbe870b-89f2-4e48-989f-4491894e20e0","Type":"ContainerStarted","Data":"2bb8dab9f4ce52e917ddba85e3541a33ea37664415adf58a22315bd5673dedc2"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.237488 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" event={"ID":"7562bfed-f13e-4197-8169-adbed0092212","Type":"ContainerStarted","Data":"fd29a08f338204edd3fe9e7b57ea87d8e029ceb6e4af0a46808dad922eedf592"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.242801 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" event={"ID":"82e41047-a211-4e78-89bd-3c3b6fbf17c6","Type":"ContainerStarted","Data":"e8aff697e9efeccbfb9eabc18cf927ae7d62a225c5cd61f5898f94497900105a"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.245539 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" event={"ID":"02615b46-f027-4aef-b28e-8c4b1c7e1d21","Type":"ContainerStarted","Data":"72e12c27ecab0ebf8ad91234c29eeb5df7c05fbbbc66ee4fffa48a4bba27f502"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.248599 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" 
event={"ID":"03db1017-2d31-4f66-a1c6-a52297484016","Type":"ContainerStarted","Data":"88508baf782247b91e4af8036dea976539be1c423f52b204f35e0fea15a6b951"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.257197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" event={"ID":"049068f2-65f7-46d1-a0b1-d31fd9efb7b3","Type":"ContainerStarted","Data":"cb25c9e95cbc473d5a2a062ec00f2a3dd9a0987dbbd8c31a61da399507f3aaf3"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.270637 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" event={"ID":"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d","Type":"ContainerStarted","Data":"f2abc94fc7f92d192187513eef7a2904f796abed355c8d69259422d8512b4a07"} Oct 06 07:29:36 crc kubenswrapper[4769]: I1006 07:29:36.282883 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" event={"ID":"702ea655-638e-4db1-bf92-4531efcb7728","Type":"ContainerStarted","Data":"529976e6f0f35ad88a2f8e67fa982c8fb80e98251bb95766ac77d60a516bdde7"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.291179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" event={"ID":"7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8","Type":"ContainerStarted","Data":"0d62c52da050203d7f21171cec902e9da7affafd82a1f8d20d40da768336b499"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.291305 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.293226 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" 
event={"ID":"03db1017-2d31-4f66-a1c6-a52297484016","Type":"ContainerStarted","Data":"afa21dafa77463ddde037625eb11ff363b77402b24a4630daf33b831e373fa20"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.293370 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.295666 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" event={"ID":"a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3","Type":"ContainerStarted","Data":"7c31b60255a9a4dd55ff15f005da6556bd886fdf35f679f667db1f623e040b30"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.295794 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.297600 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" event={"ID":"049068f2-65f7-46d1-a0b1-d31fd9efb7b3","Type":"ContainerStarted","Data":"c1b3b8ce9734dde3f8b230d7ee458aa6c7e799b1c2d636c4c775cdd93fd2333a"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.297730 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.299550 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" event={"ID":"e43df54a-23a1-4f04-947d-5d76ee8334e0","Type":"ContainerStarted","Data":"dfbd3c54f295bd5de4b9d41732b00ef37da838482227fe363e453d22390d133a"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.299656 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.301678 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" event={"ID":"a78bef32-e8ac-4828-9eac-ba7fa7a0d609","Type":"ContainerStarted","Data":"bc9e10dbe6c4dca94d1149e47d8ec4b7f75db1c6de431bd6c2f4c6cf5bd6f857"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.301716 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" event={"ID":"a78bef32-e8ac-4828-9eac-ba7fa7a0d609","Type":"ContainerStarted","Data":"d8a32f5405a122496d331d89f8c3364d617dd6f3d086663dc22ce359da263cde"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.301743 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.303550 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" event={"ID":"b1c1ec64-da04-49e4-8eff-a9d020390333","Type":"ContainerStarted","Data":"e1b982c1bec46b95ca9300379c79b614afad7f970e0fd549b7cb470e22ce9314"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.303653 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.305245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" event={"ID":"cbbe870b-89f2-4e48-989f-4491894e20e0","Type":"ContainerStarted","Data":"ac1cae082e081a68bba845661c017d2d44a62b861cd3edd7384a20c41aa3bda7"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.305355 4769 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.306953 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" event={"ID":"7562bfed-f13e-4197-8169-adbed0092212","Type":"ContainerStarted","Data":"fdf21e7eb038772365b56a937174983451f8d032db19258a3914378bedc603e6"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.307068 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.308662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" event={"ID":"0dcb449d-5590-431e-a8d5-7cb1f6c38a9d","Type":"ContainerStarted","Data":"429d96309fea792ea5db27086abf264d4b229257dc4aea6476123150259ee63e"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.308779 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.310490 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" event={"ID":"5c160dde-ed5f-4e2e-89f8-c793ee675f54","Type":"ContainerStarted","Data":"b91f77fcaf9646c89ce7f8acbc31faa4b33c88014616c55068364e6d597b64af"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.310614 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.312136 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" 
event={"ID":"362f2d56-78f2-4179-842a-cc3b7e77b8bf","Type":"ContainerStarted","Data":"20da700fd4d3a5111a8dff47e17d5884be23237fe511de29497a159fb4457ea5"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.312256 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.313884 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" event={"ID":"2861ed50-c4c4-4ce6-84bf-975f1ca89fd7","Type":"ContainerStarted","Data":"98181931f071f775a8a397c6ee85cfe290c36b8e0e435b8aae08e67773eb9035"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.313987 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.315671 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" event={"ID":"82e41047-a211-4e78-89bd-3c3b6fbf17c6","Type":"ContainerStarted","Data":"f4f9b513037ec18366b1f164c86e6b70ceaecfe9ee69cf3e370e3e4ccc29c4c5"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.315785 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.317455 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" event={"ID":"62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a","Type":"ContainerStarted","Data":"6261b2026b020da9afe2e9a484160abf7e9349b6738b5ab0b67f9f73eec48175"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.317565 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.319173 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" event={"ID":"02615b46-f027-4aef-b28e-8c4b1c7e1d21","Type":"ContainerStarted","Data":"053b29e60bfb2f33ba7929c1e97de7678f5e1bdb4aab8ec37cf484c8c56fb2ed"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.319248 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.320937 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" event={"ID":"80adcd74-7072-48aa-9f4e-e1896c5fe81c","Type":"ContainerStarted","Data":"d305584a6e60ab1574d4034930df8ea0d9e970c0541f7d55fe91a8d0093a9ba4"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.321269 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.322375 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" podStartSLOduration=7.268415143 podStartE2EDuration="16.322362529s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:25.41265916 +0000 UTC m=+761.936940317" lastFinishedPulling="2025-10-06 07:29:34.466606556 +0000 UTC m=+770.990887703" observedRunningTime="2025-10-06 07:29:37.32204397 +0000 UTC m=+773.846325107" watchObservedRunningTime="2025-10-06 07:29:37.322362529 +0000 UTC m=+773.846643676" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.322540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" event={"ID":"e140b4bd-2692-432a-8f4c-78f4462df844","Type":"ContainerStarted","Data":"9d22a3b3fa5fd7ac6b0ba453590ba65a685304524a3d9c926728a154336bac8b"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.322563 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" event={"ID":"e140b4bd-2692-432a-8f4c-78f4462df844","Type":"ContainerStarted","Data":"74ceec4a7a9523781d39ecd1d6328eeaa9ac2618f19e446b8f40526018fddb9f"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.322661 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.324704 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" event={"ID":"b13c6907-c560-4be1-a0b6-5bb302f02f3e","Type":"ContainerStarted","Data":"2b8ce61cb11a573adc902da311c25146eaaad14ae171c825001953bc53789293"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.324816 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.326546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" event={"ID":"d27cfd7b-7bb0-462e-b6c1-c7537b34511a","Type":"ContainerStarted","Data":"44f2448120d852c838002ad8aa4a237727540fb996c7b95f3cdfa3110cd14baa"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.326717 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.328398 4769 generic.go:334] "Generic (PLEG): container 
finished" podID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerID="fad6abc420bcbf81eeb4d3e342f63277564df3052411b78bde7bcd84c7de962d" exitCode=0 Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.328620 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerDied","Data":"fad6abc420bcbf81eeb4d3e342f63277564df3052411b78bde7bcd84c7de962d"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.330314 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" event={"ID":"702ea655-638e-4db1-bf92-4531efcb7728","Type":"ContainerStarted","Data":"55d8b8806e847cd7f0ec0d44c51bb04c35159fa3314bd29e2693d544831b034d"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.330762 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.332848 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" event={"ID":"d925692d-8bc4-4013-9e6e-98f43c9cf89d","Type":"ContainerStarted","Data":"6be04c9ad0bdaf50bd1fae88f3832dff7b8a9788c0afad0be442e99bfb553bfa"} Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.333008 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.349581 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" podStartSLOduration=7.176205852 podStartE2EDuration="16.349564223s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:23.34809985 +0000 UTC 
m=+759.872381007" lastFinishedPulling="2025-10-06 07:29:32.521458231 +0000 UTC m=+769.045739378" observedRunningTime="2025-10-06 07:29:37.344601877 +0000 UTC m=+773.868883024" watchObservedRunningTime="2025-10-06 07:29:37.349564223 +0000 UTC m=+773.873845370" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.373369 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" podStartSLOduration=7.198747327 podStartE2EDuration="16.373354643s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:23.347322389 +0000 UTC m=+759.871603536" lastFinishedPulling="2025-10-06 07:29:32.521929695 +0000 UTC m=+769.046210852" observedRunningTime="2025-10-06 07:29:37.371066591 +0000 UTC m=+773.895347738" watchObservedRunningTime="2025-10-06 07:29:37.373354643 +0000 UTC m=+773.897635790" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.430867 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" podStartSLOduration=6.980903763 podStartE2EDuration="16.430852006s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.95695805 +0000 UTC m=+761.481239197" lastFinishedPulling="2025-10-06 07:29:34.406906293 +0000 UTC m=+770.931187440" observedRunningTime="2025-10-06 07:29:37.411803975 +0000 UTC m=+773.936085122" watchObservedRunningTime="2025-10-06 07:29:37.430852006 +0000 UTC m=+773.955133143" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.461877 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" podStartSLOduration=6.983369658 podStartE2EDuration="16.461852593s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.956699353 +0000 UTC m=+761.480980500" 
lastFinishedPulling="2025-10-06 07:29:34.435182278 +0000 UTC m=+770.959463435" observedRunningTime="2025-10-06 07:29:37.434988728 +0000 UTC m=+773.959269885" watchObservedRunningTime="2025-10-06 07:29:37.461852593 +0000 UTC m=+773.986133740" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.462744 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" podStartSLOduration=7.822031399 podStartE2EDuration="16.462739067s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:25.795435585 +0000 UTC m=+762.319716732" lastFinishedPulling="2025-10-06 07:29:34.436143253 +0000 UTC m=+770.960424400" observedRunningTime="2025-10-06 07:29:37.457498274 +0000 UTC m=+773.981779421" watchObservedRunningTime="2025-10-06 07:29:37.462739067 +0000 UTC m=+773.987020214" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.487021 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" podStartSLOduration=8.773394731 podStartE2EDuration="16.487002431s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:22.999271121 +0000 UTC m=+759.523552268" lastFinishedPulling="2025-10-06 07:29:30.712878821 +0000 UTC m=+767.237159968" observedRunningTime="2025-10-06 07:29:37.485995804 +0000 UTC m=+774.010276951" watchObservedRunningTime="2025-10-06 07:29:37.487002431 +0000 UTC m=+774.011283578" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.519360 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" podStartSLOduration=7.155006532 podStartE2EDuration="16.519340505s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.980319518 +0000 UTC m=+761.504600665" lastFinishedPulling="2025-10-06 
07:29:34.344653471 +0000 UTC m=+770.868934638" observedRunningTime="2025-10-06 07:29:37.51549705 +0000 UTC m=+774.039778197" watchObservedRunningTime="2025-10-06 07:29:37.519340505 +0000 UTC m=+774.043621652" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.558111 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" podStartSLOduration=15.558074754 podStartE2EDuration="15.558074754s" podCreationTimestamp="2025-10-06 07:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:29:37.547629619 +0000 UTC m=+774.071910766" watchObservedRunningTime="2025-10-06 07:29:37.558074754 +0000 UTC m=+774.082355901" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.574362 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" podStartSLOduration=7.157943453 podStartE2EDuration="16.57434557s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.980278837 +0000 UTC m=+761.504559984" lastFinishedPulling="2025-10-06 07:29:34.396680944 +0000 UTC m=+770.920962101" observedRunningTime="2025-10-06 07:29:37.572991863 +0000 UTC m=+774.097273010" watchObservedRunningTime="2025-10-06 07:29:37.57434557 +0000 UTC m=+774.098626717" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.595817 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" podStartSLOduration=5.519813763 podStartE2EDuration="16.595800237s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:23.360103128 +0000 UTC m=+759.884384275" lastFinishedPulling="2025-10-06 07:29:34.436089602 +0000 UTC m=+770.960370749" observedRunningTime="2025-10-06 
07:29:37.592389863 +0000 UTC m=+774.116671010" watchObservedRunningTime="2025-10-06 07:29:37.595800237 +0000 UTC m=+774.120081384" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.636198 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" podStartSLOduration=8.401358401 podStartE2EDuration="16.6361603s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:26.232694582 +0000 UTC m=+762.756975729" lastFinishedPulling="2025-10-06 07:29:34.467496481 +0000 UTC m=+770.991777628" observedRunningTime="2025-10-06 07:29:37.632320105 +0000 UTC m=+774.156601262" watchObservedRunningTime="2025-10-06 07:29:37.6361603 +0000 UTC m=+774.160441447" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.657055 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" podStartSLOduration=9.657044584 podStartE2EDuration="16.657037011s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:22.451961937 +0000 UTC m=+758.976243084" lastFinishedPulling="2025-10-06 07:29:29.451954364 +0000 UTC m=+765.976235511" observedRunningTime="2025-10-06 07:29:37.65008632 +0000 UTC m=+774.174367477" watchObservedRunningTime="2025-10-06 07:29:37.657037011 +0000 UTC m=+774.181318158" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.669533 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" podStartSLOduration=8.435211236 podStartE2EDuration="16.669517842s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:26.232512826 +0000 UTC m=+762.756793983" lastFinishedPulling="2025-10-06 07:29:34.466819442 +0000 UTC m=+770.991100589" observedRunningTime="2025-10-06 07:29:37.664682099 +0000 
UTC m=+774.188963246" watchObservedRunningTime="2025-10-06 07:29:37.669517842 +0000 UTC m=+774.193798989" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.684843 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" podStartSLOduration=7.210002797 podStartE2EDuration="16.684812511s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.961942016 +0000 UTC m=+761.486223163" lastFinishedPulling="2025-10-06 07:29:34.43675174 +0000 UTC m=+770.961032877" observedRunningTime="2025-10-06 07:29:37.681771837 +0000 UTC m=+774.206052984" watchObservedRunningTime="2025-10-06 07:29:37.684812511 +0000 UTC m=+774.209093648" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.699908 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" podStartSLOduration=7.156048302 podStartE2EDuration="16.699895493s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:22.97761858 +0000 UTC m=+759.501899727" lastFinishedPulling="2025-10-06 07:29:32.521465761 +0000 UTC m=+769.045746918" observedRunningTime="2025-10-06 07:29:37.69906726 +0000 UTC m=+774.223348407" watchObservedRunningTime="2025-10-06 07:29:37.699895493 +0000 UTC m=+774.224176640" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.720404 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" podStartSLOduration=8.486331403 podStartE2EDuration="16.720388993s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:26.232526506 +0000 UTC m=+762.756807653" lastFinishedPulling="2025-10-06 07:29:34.466584096 +0000 UTC m=+770.990865243" observedRunningTime="2025-10-06 07:29:37.718023418 +0000 UTC m=+774.242304565" 
watchObservedRunningTime="2025-10-06 07:29:37.720388993 +0000 UTC m=+774.244670140" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.733567 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" podStartSLOduration=7.738833237 podStartE2EDuration="16.733552013s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:25.412019353 +0000 UTC m=+761.936300500" lastFinishedPulling="2025-10-06 07:29:34.406738119 +0000 UTC m=+770.931019276" observedRunningTime="2025-10-06 07:29:37.729565564 +0000 UTC m=+774.253846711" watchObservedRunningTime="2025-10-06 07:29:37.733552013 +0000 UTC m=+774.257833160" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.751393 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" podStartSLOduration=7.278834668 podStartE2EDuration="16.7513786s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.964130236 +0000 UTC m=+761.488411393" lastFinishedPulling="2025-10-06 07:29:34.436674178 +0000 UTC m=+770.960955325" observedRunningTime="2025-10-06 07:29:37.746026674 +0000 UTC m=+774.270307821" watchObservedRunningTime="2025-10-06 07:29:37.7513786 +0000 UTC m=+774.275659747" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.766870 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" podStartSLOduration=9.028666383 podStartE2EDuration="16.766856094s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:22.947748403 +0000 UTC m=+759.472029550" lastFinishedPulling="2025-10-06 07:29:30.685938074 +0000 UTC m=+767.210219261" observedRunningTime="2025-10-06 07:29:37.763164273 +0000 UTC m=+774.287445420" watchObservedRunningTime="2025-10-06 
07:29:37.766856094 +0000 UTC m=+774.291137241" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.785111 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" podStartSLOduration=7.644051414 podStartE2EDuration="16.785092512s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:23.380527766 +0000 UTC m=+759.904808913" lastFinishedPulling="2025-10-06 07:29:32.521568854 +0000 UTC m=+769.045850011" observedRunningTime="2025-10-06 07:29:37.781676969 +0000 UTC m=+774.305958116" watchObservedRunningTime="2025-10-06 07:29:37.785092512 +0000 UTC m=+774.309373659" Oct 06 07:29:37 crc kubenswrapper[4769]: I1006 07:29:37.799981 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" podStartSLOduration=7.334865241 podStartE2EDuration="16.799964399s" podCreationTimestamp="2025-10-06 07:29:21 +0000 UTC" firstStartedPulling="2025-10-06 07:29:24.972060633 +0000 UTC m=+761.496341780" lastFinishedPulling="2025-10-06 07:29:34.437159781 +0000 UTC m=+770.961440938" observedRunningTime="2025-10-06 07:29:37.795693852 +0000 UTC m=+774.319974999" watchObservedRunningTime="2025-10-06 07:29:37.799964399 +0000 UTC m=+774.324245546" Oct 06 07:29:38 crc kubenswrapper[4769]: I1006 07:29:38.348993 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerStarted","Data":"a4c54210ee1e155f789433cf7f2e69ee1b7ffdb5f7a160b6b8ca8ae68ef30e84"} Oct 06 07:29:39 crc kubenswrapper[4769]: I1006 07:29:39.359825 4769 generic.go:334] "Generic (PLEG): container finished" podID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerID="a4c54210ee1e155f789433cf7f2e69ee1b7ffdb5f7a160b6b8ca8ae68ef30e84" exitCode=0 Oct 06 07:29:39 crc kubenswrapper[4769]: I1006 
07:29:39.359923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerDied","Data":"a4c54210ee1e155f789433cf7f2e69ee1b7ffdb5f7a160b6b8ca8ae68ef30e84"} Oct 06 07:29:39 crc kubenswrapper[4769]: I1006 07:29:39.360413 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerStarted","Data":"448e1f67f3dc96e9600e380e09c6aa33fcdcc9654dbdcfbb624441257a49fc21"} Oct 06 07:29:39 crc kubenswrapper[4769]: I1006 07:29:39.388750 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8j75r" podStartSLOduration=6.905059782 podStartE2EDuration="8.38871848s" podCreationTimestamp="2025-10-06 07:29:31 +0000 UTC" firstStartedPulling="2025-10-06 07:29:37.330709177 +0000 UTC m=+773.854990324" lastFinishedPulling="2025-10-06 07:29:38.814367875 +0000 UTC m=+775.338649022" observedRunningTime="2025-10-06 07:29:39.379406314 +0000 UTC m=+775.903687471" watchObservedRunningTime="2025-10-06 07:29:39.38871848 +0000 UTC m=+775.912999657" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.530002 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.531873 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.548221 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.610251 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcbg\" (UniqueName: \"kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.610365 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.610812 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.677010 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-84bd8f6848-9k7gr" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.712176 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcbg\" (UniqueName: \"kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg\") pod \"redhat-operators-84mhd\" (UID: 
\"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.712422 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.712519 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.713082 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.713264 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.718955 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-58d86cd59d-phrqc" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.741392 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/glance-operator-controller-manager-698456cdc6-h2gcm" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.759069 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5c497dbdb-tg9wh" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.765500 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcbg\" (UniqueName: \"kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg\") pod \"redhat-operators-84mhd\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.784258 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6675647785-zwnqv" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.859868 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.893061 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-57c9cdcf57-wjk8j" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.991217 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5b974f6766-q767n" Oct 06 07:29:41 crc kubenswrapper[4769]: I1006 07:29:41.996870 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-69b956fbf6-8n98x" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.080379 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.080534 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.093242 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-d6c9dc5bc-9s2zz" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.094329 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-6c9b57c67-t6647" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.139920 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.158704 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f5894c49f-2djd8" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.185320 
4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7cb48dbc-bq7vc" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.265680 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-66f6d6849b-njlwf" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.311040 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-c968bb45-zttxz" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.326818 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-76d5577b-pm2s4" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.347535 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.366901 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-f589c7597-dll8n" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.394488 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerStarted","Data":"8d415a0aaaa5b4e4dec892521d79bd89a92e32f8de1ceb3bec24ac51d5e88cc8"} Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.404282 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-84788b6bc5-q7mbs" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.408317 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-6bb6dcddc-c4ppt" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.411816 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f59f9d8-pvg66" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.485746 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5d98cc5575-fjmzq" Oct 06 07:29:42 crc kubenswrapper[4769]: I1006 07:29:42.766971 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8" Oct 06 07:29:43 crc kubenswrapper[4769]: I1006 07:29:43.402580 4769 generic.go:334] "Generic (PLEG): container finished" podID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerID="e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5" exitCode=0 Oct 06 07:29:43 crc kubenswrapper[4769]: I1006 07:29:43.402684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerDied","Data":"e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5"} Oct 06 07:29:44 crc kubenswrapper[4769]: I1006 07:29:44.053073 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cfc658b9-ghwpn" Oct 06 07:29:45 crc kubenswrapper[4769]: I1006 07:29:45.418562 4769 generic.go:334] "Generic (PLEG): container finished" podID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerID="06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f" exitCode=0 Oct 06 07:29:45 crc kubenswrapper[4769]: I1006 07:29:45.418605 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerDied","Data":"06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f"} Oct 06 07:29:46 crc kubenswrapper[4769]: 
I1006 07:29:46.426773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerStarted","Data":"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708"} Oct 06 07:29:46 crc kubenswrapper[4769]: I1006 07:29:46.447094 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-84mhd" podStartSLOduration=2.854069283 podStartE2EDuration="5.447066353s" podCreationTimestamp="2025-10-06 07:29:41 +0000 UTC" firstStartedPulling="2025-10-06 07:29:43.40480782 +0000 UTC m=+779.929088987" lastFinishedPulling="2025-10-06 07:29:45.99780491 +0000 UTC m=+782.522086057" observedRunningTime="2025-10-06 07:29:46.443870476 +0000 UTC m=+782.968151633" watchObservedRunningTime="2025-10-06 07:29:46.447066353 +0000 UTC m=+782.971347510" Oct 06 07:29:51 crc kubenswrapper[4769]: I1006 07:29:51.861107 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:51 crc kubenswrapper[4769]: I1006 07:29:51.861677 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:51 crc kubenswrapper[4769]: I1006 07:29:51.925247 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:52 crc kubenswrapper[4769]: I1006 07:29:52.116871 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:52 crc kubenswrapper[4769]: I1006 07:29:52.176005 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:52 crc kubenswrapper[4769]: I1006 07:29:52.468553 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-8j75r" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="registry-server" containerID="cri-o://448e1f67f3dc96e9600e380e09c6aa33fcdcc9654dbdcfbb624441257a49fc21" gracePeriod=2 Oct 06 07:29:52 crc kubenswrapper[4769]: I1006 07:29:52.510843 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.476959 4769 generic.go:334] "Generic (PLEG): container finished" podID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerID="448e1f67f3dc96e9600e380e09c6aa33fcdcc9654dbdcfbb624441257a49fc21" exitCode=0 Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.477094 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerDied","Data":"448e1f67f3dc96e9600e380e09c6aa33fcdcc9654dbdcfbb624441257a49fc21"} Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.477306 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8j75r" event={"ID":"90135968-cce3-4256-bbbd-0690e25c2a1e","Type":"ContainerDied","Data":"a5fdea5019076246552d3f1154309aa12b9b387304308ca6f06c6716b40ba420"} Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.477323 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5fdea5019076246552d3f1154309aa12b9b387304308ca6f06c6716b40ba420" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.500262 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.587945 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities\") pod \"90135968-cce3-4256-bbbd-0690e25c2a1e\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.588015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsm6b\" (UniqueName: \"kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b\") pod \"90135968-cce3-4256-bbbd-0690e25c2a1e\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.588041 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content\") pod \"90135968-cce3-4256-bbbd-0690e25c2a1e\" (UID: \"90135968-cce3-4256-bbbd-0690e25c2a1e\") " Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.588919 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities" (OuterVolumeSpecName: "utilities") pod "90135968-cce3-4256-bbbd-0690e25c2a1e" (UID: "90135968-cce3-4256-bbbd-0690e25c2a1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.593733 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b" (OuterVolumeSpecName: "kube-api-access-tsm6b") pod "90135968-cce3-4256-bbbd-0690e25c2a1e" (UID: "90135968-cce3-4256-bbbd-0690e25c2a1e"). InnerVolumeSpecName "kube-api-access-tsm6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.599983 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90135968-cce3-4256-bbbd-0690e25c2a1e" (UID: "90135968-cce3-4256-bbbd-0690e25c2a1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.689190 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.689232 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsm6b\" (UniqueName: \"kubernetes.io/projected/90135968-cce3-4256-bbbd-0690e25c2a1e-kube-api-access-tsm6b\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:53 crc kubenswrapper[4769]: I1006 07:29:53.689244 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90135968-cce3-4256-bbbd-0690e25c2a1e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:54 crc kubenswrapper[4769]: I1006 07:29:54.485923 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8j75r" Oct 06 07:29:54 crc kubenswrapper[4769]: I1006 07:29:54.519137 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:54 crc kubenswrapper[4769]: I1006 07:29:54.523573 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8j75r"] Oct 06 07:29:54 crc kubenswrapper[4769]: I1006 07:29:54.770068 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:54 crc kubenswrapper[4769]: I1006 07:29:54.770332 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84mhd" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="registry-server" containerID="cri-o://5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708" gracePeriod=2 Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.269416 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.411787 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkcbg\" (UniqueName: \"kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg\") pod \"e7c86bb6-7682-4ad8-8e89-0d664d763096\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.412213 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities\") pod \"e7c86bb6-7682-4ad8-8e89-0d664d763096\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.412365 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content\") pod \"e7c86bb6-7682-4ad8-8e89-0d664d763096\" (UID: \"e7c86bb6-7682-4ad8-8e89-0d664d763096\") " Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.413501 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities" (OuterVolumeSpecName: "utilities") pod "e7c86bb6-7682-4ad8-8e89-0d664d763096" (UID: "e7c86bb6-7682-4ad8-8e89-0d664d763096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.417246 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg" (OuterVolumeSpecName: "kube-api-access-pkcbg") pod "e7c86bb6-7682-4ad8-8e89-0d664d763096" (UID: "e7c86bb6-7682-4ad8-8e89-0d664d763096"). InnerVolumeSpecName "kube-api-access-pkcbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.494330 4769 generic.go:334] "Generic (PLEG): container finished" podID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerID="5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708" exitCode=0 Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.494373 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerDied","Data":"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708"} Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.494434 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84mhd" event={"ID":"e7c86bb6-7682-4ad8-8e89-0d664d763096","Type":"ContainerDied","Data":"8d415a0aaaa5b4e4dec892521d79bd89a92e32f8de1ceb3bec24ac51d5e88cc8"} Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.494470 4769 scope.go:117] "RemoveContainer" containerID="5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.494478 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84mhd" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.510900 4769 scope.go:117] "RemoveContainer" containerID="06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.514214 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkcbg\" (UniqueName: \"kubernetes.io/projected/e7c86bb6-7682-4ad8-8e89-0d664d763096-kube-api-access-pkcbg\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.514246 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.534706 4769 scope.go:117] "RemoveContainer" containerID="e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.534696 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7c86bb6-7682-4ad8-8e89-0d664d763096" (UID: "e7c86bb6-7682-4ad8-8e89-0d664d763096"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.575526 4769 scope.go:117] "RemoveContainer" containerID="5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708" Oct 06 07:29:55 crc kubenswrapper[4769]: E1006 07:29:55.576293 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708\": container with ID starting with 5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708 not found: ID does not exist" containerID="5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.576354 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708"} err="failed to get container status \"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708\": rpc error: code = NotFound desc = could not find container \"5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708\": container with ID starting with 5aee7374bf9f583f2f55825ea0fdcee4a01a5f1e2e07d3f4735ca0bfbf618708 not found: ID does not exist" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.576387 4769 scope.go:117] "RemoveContainer" containerID="06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f" Oct 06 07:29:55 crc kubenswrapper[4769]: E1006 07:29:55.576941 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f\": container with ID starting with 06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f not found: ID does not exist" containerID="06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.576983 
4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f"} err="failed to get container status \"06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f\": rpc error: code = NotFound desc = could not find container \"06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f\": container with ID starting with 06e9b8e4e4756223e730a2f5a590f2a6ec02818b2b8bcdc1385926688084685f not found: ID does not exist" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.577011 4769 scope.go:117] "RemoveContainer" containerID="e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5" Oct 06 07:29:55 crc kubenswrapper[4769]: E1006 07:29:55.577321 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5\": container with ID starting with e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5 not found: ID does not exist" containerID="e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.577340 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5"} err="failed to get container status \"e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5\": rpc error: code = NotFound desc = could not find container \"e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5\": container with ID starting with e9b09d067e34929ffbcdd27005fd8abb4756e5c33bd621a60afc4cb7fd80cef5 not found: ID does not exist" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.615278 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e7c86bb6-7682-4ad8-8e89-0d664d763096-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.829322 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:55 crc kubenswrapper[4769]: I1006 07:29:55.833924 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84mhd"] Oct 06 07:29:56 crc kubenswrapper[4769]: I1006 07:29:56.183908 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" path="/var/lib/kubelet/pods/90135968-cce3-4256-bbbd-0690e25c2a1e/volumes" Oct 06 07:29:56 crc kubenswrapper[4769]: I1006 07:29:56.185285 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" path="/var/lib/kubelet/pods/e7c86bb6-7682-4ad8-8e89-0d664d763096/volumes" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.135455 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8"] Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 07:30:00.136276 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="extract-utilities" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136292 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="extract-utilities" Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 07:30:00.136338 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="extract-content" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136349 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="extract-content" Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 
07:30:00.136381 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136392 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 07:30:00.136418 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136451 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 07:30:00.136480 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="extract-utilities" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136492 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="extract-utilities" Oct 06 07:30:00 crc kubenswrapper[4769]: E1006 07:30:00.136526 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="extract-content" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136536 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="extract-content" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136726 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="90135968-cce3-4256-bbbd-0690e25c2a1e" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.136742 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c86bb6-7682-4ad8-8e89-0d664d763096" containerName="registry-server" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 
07:30:00.137240 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.143951 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.144402 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.151772 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8"] Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.194923 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.194964 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z5bc\" (UniqueName: \"kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.195028 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: 
\"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.295822 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.295919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z5bc\" (UniqueName: \"kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.296013 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.298211 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.301563 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.323655 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z5bc\" (UniqueName: \"kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc\") pod \"collect-profiles-29328930-5tnc8\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.456819 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:00 crc kubenswrapper[4769]: I1006 07:30:00.917646 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8"] Oct 06 07:30:01 crc kubenswrapper[4769]: I1006 07:30:01.551248 4769 generic.go:334] "Generic (PLEG): container finished" podID="df2e5f95-d641-4620-8048-dee21f4141da" containerID="375271b2362b330c3c2d1eb97bd28f818d7b7fb57e6d3b0c1ab9ef62c97879b9" exitCode=0 Oct 06 07:30:01 crc kubenswrapper[4769]: I1006 07:30:01.551317 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" event={"ID":"df2e5f95-d641-4620-8048-dee21f4141da","Type":"ContainerDied","Data":"375271b2362b330c3c2d1eb97bd28f818d7b7fb57e6d3b0c1ab9ef62c97879b9"} Oct 06 07:30:01 crc kubenswrapper[4769]: I1006 07:30:01.551522 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" 
event={"ID":"df2e5f95-d641-4620-8048-dee21f4141da","Type":"ContainerStarted","Data":"e0f019e3fb31d5a76821ba24ca5afb95c2dab245497011a5e9fae375fdda70f7"} Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.076412 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.077776 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.080143 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-f8dkl" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.080547 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.081018 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.084149 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.085688 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.138632 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.141737 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.158001 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.228333 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.229239 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtgw\" (UniqueName: \"kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.229387 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.229597 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.230984 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f57x\" (UniqueName: \"kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " 
pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.231117 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.332231 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.332299 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f57x\" (UniqueName: \"kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.332342 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.332368 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtgw\" (UniqueName: \"kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 
07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.332399 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.333718 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.333737 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.333991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.360439 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f57x\" (UniqueName: \"kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x\") pod \"dnsmasq-dns-7c4b488c7f-8ckl7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.360441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-brtgw\" (UniqueName: \"kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw\") pod \"dnsmasq-dns-7f565b7f65-c65zg\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.392655 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.519300 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.837788 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.867734 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:02 crc kubenswrapper[4769]: W1006 07:30:02.876930 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb34ad133_6b32_4c98_9a90_c49503405aea.slice/crio-7b19d95f1b57bb7ae3531206ca590964196b40a63e3fe8fac207cbc587465b69 WatchSource:0}: Error finding container 7b19d95f1b57bb7ae3531206ca590964196b40a63e3fe8fac207cbc587465b69: Status 404 returned error can't find the container with id 7b19d95f1b57bb7ae3531206ca590964196b40a63e3fe8fac207cbc587465b69 Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.943584 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume\") pod \"df2e5f95-d641-4620-8048-dee21f4141da\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.943678 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume\") pod \"df2e5f95-d641-4620-8048-dee21f4141da\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.943753 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z5bc\" (UniqueName: \"kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc\") pod \"df2e5f95-d641-4620-8048-dee21f4141da\" (UID: \"df2e5f95-d641-4620-8048-dee21f4141da\") " Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.944208 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume" (OuterVolumeSpecName: "config-volume") pod "df2e5f95-d641-4620-8048-dee21f4141da" (UID: "df2e5f95-d641-4620-8048-dee21f4141da"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.948343 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df2e5f95-d641-4620-8048-dee21f4141da" (UID: "df2e5f95-d641-4620-8048-dee21f4141da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:30:02 crc kubenswrapper[4769]: I1006 07:30:02.949565 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc" (OuterVolumeSpecName: "kube-api-access-7z5bc") pod "df2e5f95-d641-4620-8048-dee21f4141da" (UID: "df2e5f95-d641-4620-8048-dee21f4141da"). InnerVolumeSpecName "kube-api-access-7z5bc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.031092 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.044708 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df2e5f95-d641-4620-8048-dee21f4141da-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.044731 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z5bc\" (UniqueName: \"kubernetes.io/projected/df2e5f95-d641-4620-8048-dee21f4141da-kube-api-access-7z5bc\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.044739 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2e5f95-d641-4620-8048-dee21f4141da-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.564731 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" event={"ID":"df2e5f95-d641-4620-8048-dee21f4141da","Type":"ContainerDied","Data":"e0f019e3fb31d5a76821ba24ca5afb95c2dab245497011a5e9fae375fdda70f7"} Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.564763 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.564776 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f019e3fb31d5a76821ba24ca5afb95c2dab245497011a5e9fae375fdda70f7" Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.566557 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" event={"ID":"d7f8f2c8-d323-4671-b7fb-7419051e04c7","Type":"ContainerStarted","Data":"43598bdeb18d1d3605797c00931cf3cf9c4fb12a191627900a82814e4029de05"} Oct 06 07:30:03 crc kubenswrapper[4769]: I1006 07:30:03.567778 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" event={"ID":"b34ad133-6b32-4c98-9a90-c49503405aea","Type":"ContainerStarted","Data":"7b19d95f1b57bb7ae3531206ca590964196b40a63e3fe8fac207cbc587465b69"} Oct 06 07:30:04 crc kubenswrapper[4769]: I1006 07:30:04.989701 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.031525 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:05 crc kubenswrapper[4769]: E1006 07:30:05.032879 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2e5f95-d641-4620-8048-dee21f4141da" containerName="collect-profiles" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.032912 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2e5f95-d641-4620-8048-dee21f4141da" containerName="collect-profiles" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.033744 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2e5f95-d641-4620-8048-dee21f4141da" containerName="collect-profiles" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.035699 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.040048 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.174491 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.174545 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxmx\" (UniqueName: \"kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.174585 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.276237 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.276295 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxmx\" (UniqueName: 
\"kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.276332 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.277386 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.279957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.292188 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.315731 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.316911 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.329757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxmx\" (UniqueName: \"kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx\") pod \"dnsmasq-dns-84c8c6fd99-24glj\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.343389 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.371820 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.376991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.377139 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq25v\" (UniqueName: \"kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.377515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " 
pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.478585 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq25v\" (UniqueName: \"kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.478677 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.478704 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.479574 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.483780 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.498529 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq25v\" (UniqueName: \"kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v\") pod \"dnsmasq-dns-7cb77f6b6c-kz76s\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.659255 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.884917 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:05 crc kubenswrapper[4769]: I1006 07:30:05.903663 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.156612 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.158841 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.162302 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.162861 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.163015 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.163276 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.163459 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.163584 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9p6xc" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.164942 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.175915 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.177567 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291211 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" 
Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291266 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291302 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291374 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291441 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291465 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291506 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291537 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.291566 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393081 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393123 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393186 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393207 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393243 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393277 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.393345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.394312 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.399890 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.400161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.400276 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.401188 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.401397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.402000 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.405173 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.415031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.417253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc 
kubenswrapper[4769]: I1006 07:30:06.422519 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.433076 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.465629 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.467995 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.471596 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.471790 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.471896 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.472071 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-h5gd7" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.472169 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.472292 4769 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cert-rabbitmq-svc" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.472398 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.486045 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.493469 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.595840 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596157 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596229 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596388 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdqt\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596522 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " 
pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.596563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.597572 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.621562 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" event={"ID":"500381ee-62ba-4309-b58e-1e398c1f8ea2","Type":"ContainerStarted","Data":"1f5958623c6b075675e99c3ea23554cdc15fbf599b7b6800086d206d5527a96e"} Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.622936 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerStarted","Data":"46402d7694437602fd2036470a5fb8f4420407cf0e096ae6367db089f5ea2268"} Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.700743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.700884 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.700923 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.700941 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701022 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701158 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701567 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 
06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701592 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdqt\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701599 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701613 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701690 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701729 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.701987 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.702268 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.702349 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.702443 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.704313 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.705573 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " 
pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.705840 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.707032 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.707543 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.732514 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdqt\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.739799 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.811833 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:30:06 crc kubenswrapper[4769]: I1006 07:30:06.936800 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:30:06 crc kubenswrapper[4769]: W1006 07:30:06.948714 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654c5c70_fc54_4a56_9fb8_c1ffe32089ca.slice/crio-84e3fa8682185fdf214584b4dacb178239fcf9b262c7999309201d3eb48121b5 WatchSource:0}: Error finding container 84e3fa8682185fdf214584b4dacb178239fcf9b262c7999309201d3eb48121b5: Status 404 returned error can't find the container with id 84e3fa8682185fdf214584b4dacb178239fcf9b262c7999309201d3eb48121b5 Oct 06 07:30:07 crc kubenswrapper[4769]: I1006 07:30:07.247814 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:30:07 crc kubenswrapper[4769]: W1006 07:30:07.256167 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc556df6a_9389_4852_b1d8_ba7bbf8bc614.slice/crio-4a5f3aa4d95353bf100a70182fbbb491e660051421b82fa44be4615908fa66f8 WatchSource:0}: Error finding container 4a5f3aa4d95353bf100a70182fbbb491e660051421b82fa44be4615908fa66f8: Status 404 returned error can't find the container with id 4a5f3aa4d95353bf100a70182fbbb491e660051421b82fa44be4615908fa66f8 Oct 06 07:30:07 crc kubenswrapper[4769]: I1006 07:30:07.642026 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerStarted","Data":"4a5f3aa4d95353bf100a70182fbbb491e660051421b82fa44be4615908fa66f8"} Oct 06 07:30:07 crc kubenswrapper[4769]: I1006 07:30:07.643807 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerStarted","Data":"84e3fa8682185fdf214584b4dacb178239fcf9b262c7999309201d3eb48121b5"} Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.205824 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.207218 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.208778 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.212765 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-p85xm" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.225897 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.226767 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.229784 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.229835 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.244476 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.337862 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.349360 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.355092 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.355365 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-k5mxr" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.355387 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.355529 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.355975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-secrets\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356010 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kolla-config\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356050 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-generated\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356077 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356119 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-default\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356136 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356157 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxfhh\" (UniqueName: \"kubernetes.io/projected/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kube-api-access-lxfhh\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356182 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-operator-scripts\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356200 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.356701 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.457771 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-secrets\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.457821 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kolla-config\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.457863 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-generated\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.457913 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.458483 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-generated\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.458530 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71170300-e629-4a76-8960-29bdc328edf5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.458549 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.458779 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.458908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kolla-config\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459037 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459103 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-default\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459136 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459162 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459212 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxfhh\" (UniqueName: \"kubernetes.io/projected/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kube-api-access-lxfhh\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459245 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459319 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8r5\" (UniqueName: \"kubernetes.io/projected/71170300-e629-4a76-8960-29bdc328edf5-kube-api-access-kw8r5\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459359 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-operator-scripts\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459391 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459452 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459522 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459550 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.459831 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-config-data-default\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.460856 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/687dbbb5-7929-4674-9ac4-77ec7ff8e424-operator-scripts\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.465915 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.468606 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " 
pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.477578 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/687dbbb5-7929-4674-9ac4-77ec7ff8e424-secrets\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.479759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxfhh\" (UniqueName: \"kubernetes.io/projected/687dbbb5-7929-4674-9ac4-77ec7ff8e424-kube-api-access-lxfhh\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.481326 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"687dbbb5-7929-4674-9ac4-77ec7ff8e424\") " pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.530368 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561122 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561165 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw8r5\" (UniqueName: \"kubernetes.io/projected/71170300-e629-4a76-8960-29bdc328edf5-kube-api-access-kw8r5\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561220 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561236 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561298 
4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71170300-e629-4a76-8960-29bdc328edf5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561314 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561336 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.561357 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.562966 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.563973 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.566900 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/71170300-e629-4a76-8960-29bdc328edf5-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.569522 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.571817 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71170300-e629-4a76-8960-29bdc328edf5-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.576154 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.576348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: 
\"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.576612 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/71170300-e629-4a76-8960-29bdc328edf5-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.588689 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw8r5\" (UniqueName: \"kubernetes.io/projected/71170300-e629-4a76-8960-29bdc328edf5-kube-api-access-kw8r5\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.605538 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"71170300-e629-4a76-8960-29bdc328edf5\") " pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.696880 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.779226 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.780276 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.786207 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.786230 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qg5rx" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.786351 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.797529 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.879296 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9vn\" (UniqueName: \"kubernetes.io/projected/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kube-api-access-sx9vn\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.879346 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.879404 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kolla-config\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.879457 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.879510 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-config-data\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.980532 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9vn\" (UniqueName: \"kubernetes.io/projected/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kube-api-access-sx9vn\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.980571 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.981081 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kolla-config\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.981386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.983587 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kolla-config\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.983715 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-config-data\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.984849 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c804d714-a4d7-487d-b52a-bb2e1a47f36b-config-data\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.987354 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:09 crc kubenswrapper[4769]: I1006 07:30:09.987581 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c804d714-a4d7-487d-b52a-bb2e1a47f36b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.004410 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9vn\" (UniqueName: 
\"kubernetes.io/projected/c804d714-a4d7-487d-b52a-bb2e1a47f36b-kube-api-access-sx9vn\") pod \"memcached-0\" (UID: \"c804d714-a4d7-487d-b52a-bb2e1a47f36b\") " pod="openstack/memcached-0" Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.108848 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.122866 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 06 07:30:10 crc kubenswrapper[4769]: W1006 07:30:10.147792 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod687dbbb5_7929_4674_9ac4_77ec7ff8e424.slice/crio-1df31e553f2138650d7ff186c606d3e19a27d7dd4bfcd07f1462fd02bf89e7f2 WatchSource:0}: Error finding container 1df31e553f2138650d7ff186c606d3e19a27d7dd4bfcd07f1462fd02bf89e7f2: Status 404 returned error can't find the container with id 1df31e553f2138650d7ff186c606d3e19a27d7dd4bfcd07f1462fd02bf89e7f2 Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.347356 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 06 07:30:10 crc kubenswrapper[4769]: W1006 07:30:10.363816 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71170300_e629_4a76_8960_29bdc328edf5.slice/crio-e487946e03b28c135a1ab91f22d2b040fb97774ad6c788cc1082579bfc8b640b WatchSource:0}: Error finding container e487946e03b28c135a1ab91f22d2b040fb97774ad6c788cc1082579bfc8b640b: Status 404 returned error can't find the container with id e487946e03b28c135a1ab91f22d2b040fb97774ad6c788cc1082579bfc8b640b Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.590192 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.675112 4769 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"687dbbb5-7929-4674-9ac4-77ec7ff8e424","Type":"ContainerStarted","Data":"1df31e553f2138650d7ff186c606d3e19a27d7dd4bfcd07f1462fd02bf89e7f2"} Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.676232 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71170300-e629-4a76-8960-29bdc328edf5","Type":"ContainerStarted","Data":"e487946e03b28c135a1ab91f22d2b040fb97774ad6c788cc1082579bfc8b640b"} Oct 06 07:30:10 crc kubenswrapper[4769]: I1006 07:30:10.677818 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c804d714-a4d7-487d-b52a-bb2e1a47f36b","Type":"ContainerStarted","Data":"8847cca2cf9c335a86431efd3e470a853dfeb6578ca491ee7f2ee35626666ab3"} Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.730912 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.731938 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.733674 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r2flg" Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.744730 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.831182 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmqn\" (UniqueName: \"kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn\") pod \"kube-state-metrics-0\" (UID: \"d657d340-ec67-493c-8403-4bffd42ae0b3\") " pod="openstack/kube-state-metrics-0" Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.932086 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmqn\" (UniqueName: \"kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn\") pod \"kube-state-metrics-0\" (UID: \"d657d340-ec67-493c-8403-4bffd42ae0b3\") " pod="openstack/kube-state-metrics-0" Oct 06 07:30:11 crc kubenswrapper[4769]: I1006 07:30:11.955401 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmqn\" (UniqueName: \"kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn\") pod \"kube-state-metrics-0\" (UID: \"d657d340-ec67-493c-8403-4bffd42ae0b3\") " pod="openstack/kube-state-metrics-0" Oct 06 07:30:12 crc kubenswrapper[4769]: I1006 07:30:12.054937 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:30:12 crc kubenswrapper[4769]: I1006 07:30:12.578138 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:30:12 crc kubenswrapper[4769]: W1006 07:30:12.600987 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd657d340_ec67_493c_8403_4bffd42ae0b3.slice/crio-c791c199b6292586f4c2049fdb1fade446abceb37d3f0ad249cc2ecbc306bac1 WatchSource:0}: Error finding container c791c199b6292586f4c2049fdb1fade446abceb37d3f0ad249cc2ecbc306bac1: Status 404 returned error can't find the container with id c791c199b6292586f4c2049fdb1fade446abceb37d3f0ad249cc2ecbc306bac1 Oct 06 07:30:12 crc kubenswrapper[4769]: I1006 07:30:12.701430 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d657d340-ec67-493c-8403-4bffd42ae0b3","Type":"ContainerStarted","Data":"c791c199b6292586f4c2049fdb1fade446abceb37d3f0ad249cc2ecbc306bac1"} Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.147996 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rdj69"] Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.149523 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.156496 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-r9pfw" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.157528 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.157743 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.162118 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69"] Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.167430 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-4qmzc"] Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.168923 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v6fv\" (UniqueName: \"kubernetes.io/projected/7666a29e-0c83-4099-ae6d-1fc333d3c630-kube-api-access-9v6fv\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199622 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx8d4\" (UniqueName: \"kubernetes.io/projected/530bb6cc-e4fa-42bf-88aa-38020d3b5513-kube-api-access-xx8d4\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199737 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7666a29e-0c83-4099-ae6d-1fc333d3c630-scripts\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199824 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/530bb6cc-e4fa-42bf-88aa-38020d3b5513-scripts\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199899 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-run\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " 
pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.199963 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-combined-ca-bundle\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200030 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-log-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200103 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-log\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200218 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-etc-ovs\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200297 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc 
kubenswrapper[4769]: I1006 07:30:15.200368 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-ovn-controller-tls-certs\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-lib\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.200629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.203586 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4qmzc"] Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-etc-ovs\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302407 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run\") pod \"ovn-controller-rdj69\" (UID: 
\"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-ovn-controller-tls-certs\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302481 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-lib\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302552 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302790 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v6fv\" (UniqueName: \"kubernetes.io/projected/7666a29e-0c83-4099-ae6d-1fc333d3c630-kube-api-access-9v6fv\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302826 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx8d4\" (UniqueName: \"kubernetes.io/projected/530bb6cc-e4fa-42bf-88aa-38020d3b5513-kube-api-access-xx8d4\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 
07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302853 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7666a29e-0c83-4099-ae6d-1fc333d3c630-scripts\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/530bb6cc-e4fa-42bf-88aa-38020d3b5513-scripts\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302945 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-etc-ovs\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.302998 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-run\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.303023 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-combined-ca-bundle\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.303093 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-log-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.303114 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-log\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.304549 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-log-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.304822 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-log\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.305232 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7666a29e-0c83-4099-ae6d-1fc333d3c630-scripts\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.305307 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/530bb6cc-e4fa-42bf-88aa-38020d3b5513-scripts\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 
07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.305397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run-ovn\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.305474 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-run\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.305576 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/530bb6cc-e4fa-42bf-88aa-38020d3b5513-var-lib\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.310066 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-combined-ca-bundle\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.312634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7666a29e-0c83-4099-ae6d-1fc333d3c630-var-run\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.323455 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7666a29e-0c83-4099-ae6d-1fc333d3c630-ovn-controller-tls-certs\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.323503 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v6fv\" (UniqueName: \"kubernetes.io/projected/7666a29e-0c83-4099-ae6d-1fc333d3c630-kube-api-access-9v6fv\") pod \"ovn-controller-rdj69\" (UID: \"7666a29e-0c83-4099-ae6d-1fc333d3c630\") " pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.329144 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx8d4\" (UniqueName: \"kubernetes.io/projected/530bb6cc-e4fa-42bf-88aa-38020d3b5513-kube-api-access-xx8d4\") pod \"ovn-controller-ovs-4qmzc\" (UID: \"530bb6cc-e4fa-42bf-88aa-38020d3b5513\") " pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.495898 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.506035 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.971950 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:30:15 crc kubenswrapper[4769]: I1006 07:30:15.982641 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.004705 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.022850 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.022898 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb494\" (UniqueName: \"kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.022940 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.124136 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.124274 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.124324 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb494\" (UniqueName: \"kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.124704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.124717 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.152279 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb494\" (UniqueName: \"kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494\") pod \"community-operators-xdb5d\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.336774 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.376026 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69"] Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.513982 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.531931 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.533406 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.536783 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.536840 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f4wv\" (UniqueName: \"kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.541027 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.643203 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f4wv\" (UniqueName: \"kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.643367 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.643394 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.645755 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.648504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " 
pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.672301 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f4wv\" (UniqueName: \"kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv\") pod \"certified-operators-bkkb2\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.702692 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-4qmzc"] Oct 06 07:30:16 crc kubenswrapper[4769]: W1006 07:30:16.744402 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod530bb6cc_e4fa_42bf_88aa_38020d3b5513.slice/crio-f8d7eae018c7c6fcf8d85e2fe37f6a175c4d95b1fb92d33036211243aea18fdc WatchSource:0}: Error finding container f8d7eae018c7c6fcf8d85e2fe37f6a175c4d95b1fb92d33036211243aea18fdc: Status 404 returned error can't find the container with id f8d7eae018c7c6fcf8d85e2fe37f6a175c4d95b1fb92d33036211243aea18fdc Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.788365 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d657d340-ec67-493c-8403-4bffd42ae0b3","Type":"ContainerStarted","Data":"0f72a0997ed6f8c0590b272fee7dacc0a74eb6e558de7c43e3482e258fcae7e4"} Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.788668 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.789686 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4qmzc" event={"ID":"530bb6cc-e4fa-42bf-88aa-38020d3b5513","Type":"ContainerStarted","Data":"f8d7eae018c7c6fcf8d85e2fe37f6a175c4d95b1fb92d33036211243aea18fdc"} Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 
07:30:16.792727 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69" event={"ID":"7666a29e-0c83-4099-ae6d-1fc333d3c630","Type":"ContainerStarted","Data":"67ffec96c229ff75cb6e090c27639fac554b59aa974d5507fa680ae5915907a9"} Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.809102 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.574161004 podStartE2EDuration="5.809082765s" podCreationTimestamp="2025-10-06 07:30:11 +0000 UTC" firstStartedPulling="2025-10-06 07:30:12.60447332 +0000 UTC m=+809.128754467" lastFinishedPulling="2025-10-06 07:30:15.839395091 +0000 UTC m=+812.363676228" observedRunningTime="2025-10-06 07:30:16.805962199 +0000 UTC m=+813.330243346" watchObservedRunningTime="2025-10-06 07:30:16.809082765 +0000 UTC m=+813.333363912" Oct 06 07:30:16 crc kubenswrapper[4769]: I1006 07:30:16.883186 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.006831 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.025279 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.026704 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.029556 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.029663 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.036017 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.036230 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.036336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mfsjp" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.037084 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.058437 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.058720 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.058857 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.059001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7rj\" (UniqueName: \"kubernetes.io/projected/91209a46-4315-4dd2-91a5-7658b454d9ec-kube-api-access-lm7rj\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.059118 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.059321 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.059447 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.059717 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.162630 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163010 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7rj\" (UniqueName: \"kubernetes.io/projected/91209a46-4315-4dd2-91a5-7658b454d9ec-kube-api-access-lm7rj\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163101 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163152 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.163189 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.164371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.166760 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.167179 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.167343 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91209a46-4315-4dd2-91a5-7658b454d9ec-config\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.174414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.178183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.179162 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/91209a46-4315-4dd2-91a5-7658b454d9ec-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.182599 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7rj\" (UniqueName: \"kubernetes.io/projected/91209a46-4315-4dd2-91a5-7658b454d9ec-kube-api-access-lm7rj\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") 
" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.199324 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"91209a46-4315-4dd2-91a5-7658b454d9ec\") " pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.362232 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.443246 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.832444 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerStarted","Data":"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae"} Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.832892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerStarted","Data":"695a575bb43533c254a6df023b32fd3f400bef6d0b5a51259842ae9c1115aa1d"} Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.837340 4769 generic.go:334] "Generic (PLEG): container finished" podID="e2389387-bd73-49fb-b119-41f04a08f615" containerID="48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43" exitCode=0 Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.837963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerDied","Data":"48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43"} Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 
07:30:17.844403 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerStarted","Data":"d67090afbecb9cf131214b2bdddd95024dbfe7e63e95336363e51bcea248834e"} Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.927452 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vh747"] Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.929034 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.936916 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Oct 06 07:30:17 crc kubenswrapper[4769]: I1006 07:30:17.941978 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vh747"] Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.091931 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8dzd\" (UniqueName: \"kubernetes.io/projected/780fca18-2e85-4252-9b00-326e46d26eae-kube-api-access-l8dzd\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.092017 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-combined-ca-bundle\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.092073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovs-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.092125 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovn-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.092142 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.092233 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780fca18-2e85-4252-9b00-326e46d26eae-config\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.106480 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 06 07:30:18 crc kubenswrapper[4769]: W1006 07:30:18.130064 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91209a46_4315_4dd2_91a5_7658b454d9ec.slice/crio-134e0e3555dd6507e054e3b24bd34cb27e6ef3fdb2833825fde582885e00753d WatchSource:0}: Error finding container 134e0e3555dd6507e054e3b24bd34cb27e6ef3fdb2833825fde582885e00753d: Status 
404 returned error can't find the container with id 134e0e3555dd6507e054e3b24bd34cb27e6ef3fdb2833825fde582885e00753d Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.193880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-combined-ca-bundle\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.193926 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovs-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.193960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovn-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.193977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.194044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780fca18-2e85-4252-9b00-326e46d26eae-config\") pod \"ovn-controller-metrics-vh747\" (UID: 
\"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.194073 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8dzd\" (UniqueName: \"kubernetes.io/projected/780fca18-2e85-4252-9b00-326e46d26eae-kube-api-access-l8dzd\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.196770 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovn-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.196898 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/780fca18-2e85-4252-9b00-326e46d26eae-ovs-rundir\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.198568 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.201372 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/780fca18-2e85-4252-9b00-326e46d26eae-config\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.209127 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.209165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780fca18-2e85-4252-9b00-326e46d26eae-combined-ca-bundle\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.222872 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8dzd\" (UniqueName: \"kubernetes.io/projected/780fca18-2e85-4252-9b00-326e46d26eae-kube-api-access-l8dzd\") pod \"ovn-controller-metrics-vh747\" (UID: \"780fca18-2e85-4252-9b00-326e46d26eae\") " pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.255589 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.257058 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vh747" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.265192 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.265301 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.269981 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.398187 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.398545 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkbl\" (UniqueName: \"kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.398615 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.398682 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.501004 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.501272 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.501321 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkbl\" (UniqueName: \"kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.501445 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.502039 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.502489 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config\") pod 
\"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.503585 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.526561 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkbl\" (UniqueName: \"kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl\") pod \"dnsmasq-dns-cb69d786f-742h8\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.592754 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.752797 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vh747"] Oct 06 07:30:18 crc kubenswrapper[4769]: W1006 07:30:18.767364 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780fca18_2e85_4252_9b00_326e46d26eae.slice/crio-a2eada73b31ac90770795d34ef5cf65e31c71014055ef577d0a3b908cd1c4454 WatchSource:0}: Error finding container a2eada73b31ac90770795d34ef5cf65e31c71014055ef577d0a3b908cd1c4454: Status 404 returned error can't find the container with id a2eada73b31ac90770795d34ef5cf65e31c71014055ef577d0a3b908cd1c4454 Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.853019 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vh747" event={"ID":"780fca18-2e85-4252-9b00-326e46d26eae","Type":"ContainerStarted","Data":"a2eada73b31ac90770795d34ef5cf65e31c71014055ef577d0a3b908cd1c4454"} Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.854985 4769 generic.go:334] "Generic (PLEG): container finished" podID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerID="320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae" exitCode=0 Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.855046 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerDied","Data":"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae"} Oct 06 07:30:18 crc kubenswrapper[4769]: I1006 07:30:18.869179 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"91209a46-4315-4dd2-91a5-7658b454d9ec","Type":"ContainerStarted","Data":"134e0e3555dd6507e054e3b24bd34cb27e6ef3fdb2833825fde582885e00753d"} Oct 06 07:30:19 crc 
kubenswrapper[4769]: I1006 07:30:19.047867 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.053077 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.058377 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.058529 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.058637 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-smcfb" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.058987 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 06 07:30:19 crc kubenswrapper[4769]: W1006 07:30:19.070092 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff15fac5_dae0_423d_803b_bd3caad160ae.slice/crio-c17dca402377503c8c3aabaeae8566d0ebb22b1dd44639fe2460aa859dcc8b75 WatchSource:0}: Error finding container c17dca402377503c8c3aabaeae8566d0ebb22b1dd44639fe2460aa859dcc8b75: Status 404 returned error can't find the container with id c17dca402377503c8c3aabaeae8566d0ebb22b1dd44639fe2460aa859dcc8b75 Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.070129 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.074018 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214786 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214827 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214873 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214969 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.214996 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2l2\" (UniqueName: \"kubernetes.io/projected/1687c91c-da3f-4399-a0a8-b9d1769ddd30-kube-api-access-dv2l2\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.215053 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.316987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2l2\" (UniqueName: \"kubernetes.io/projected/1687c91c-da3f-4399-a0a8-b9d1769ddd30-kube-api-access-dv2l2\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317120 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317171 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317190 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317264 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317281 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317318 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317385 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.317833 4769 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.318113 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.318721 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.318853 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1687c91c-da3f-4399-a0a8-b9d1769ddd30-config\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.323504 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.323880 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-ovsdbserver-nb-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.325788 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1687c91c-da3f-4399-a0a8-b9d1769ddd30-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.341957 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2l2\" (UniqueName: \"kubernetes.io/projected/1687c91c-da3f-4399-a0a8-b9d1769ddd30-kube-api-access-dv2l2\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.348924 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"1687c91c-da3f-4399-a0a8-b9d1769ddd30\") " pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.383876 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.905072 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb69d786f-742h8" event={"ID":"ff15fac5-dae0-423d-803b-bd3caad160ae","Type":"ContainerStarted","Data":"c17dca402377503c8c3aabaeae8566d0ebb22b1dd44639fe2460aa859dcc8b75"} Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.909044 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerStarted","Data":"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00"} Oct 06 07:30:19 crc kubenswrapper[4769]: I1006 07:30:19.920146 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 06 07:30:20 crc kubenswrapper[4769]: I1006 07:30:20.922467 4769 generic.go:334] "Generic (PLEG): container finished" podID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerID="7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00" exitCode=0 Oct 06 07:30:20 crc kubenswrapper[4769]: I1006 07:30:20.922530 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerDied","Data":"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00"} Oct 06 07:30:20 crc kubenswrapper[4769]: I1006 07:30:20.924612 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1687c91c-da3f-4399-a0a8-b9d1769ddd30","Type":"ContainerStarted","Data":"84f2098329918a1308a2887c21efacc363d08ffd0b8e0d3b6970116a11dd2393"} Oct 06 07:30:22 crc kubenswrapper[4769]: I1006 07:30:22.061229 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 07:30:40 crc kubenswrapper[4769]: E1006 07:30:40.298309 4769 log.go:32] "PullImage from 
image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72" Oct 06 07:30:40 crc kubenswrapper[4769]: E1006 07:30:40.299220 4769 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72" Oct 06 07:30:40 crc kubenswrapper[4769]: E1006 07:30:40.299410 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5f57x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c4b488c7f-8ckl7_openstack(d7f8f2c8-d323-4671-b7fb-7419051e04c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 07:30:40 crc kubenswrapper[4769]: E1006 07:30:40.300611 4769 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" podUID="d7f8f2c8-d323-4671-b7fb-7419051e04c7" Oct 06 07:30:41 crc kubenswrapper[4769]: E1006 07:30:41.228836 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72" Oct 06 07:30:41 crc kubenswrapper[4769]: E1006 07:30:41.228887 4769 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72" Oct 06 07:30:41 crc kubenswrapper[4769]: E1006 07:30:41.229006 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:d018fe05595a319a521aca6a2235ba72,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brtgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f565b7f65-c65zg_openstack(b34ad133-6b32-4c98-9a90-c49503405aea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 07:30:41 crc kubenswrapper[4769]: E1006 07:30:41.230411 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" podUID="b34ad133-6b32-4c98-9a90-c49503405aea" Oct 06 07:30:42 crc kubenswrapper[4769]: I1006 07:30:42.881442 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.009470 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config\") pod \"b34ad133-6b32-4c98-9a90-c49503405aea\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.009710 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brtgw\" (UniqueName: \"kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw\") pod \"b34ad133-6b32-4c98-9a90-c49503405aea\" (UID: \"b34ad133-6b32-4c98-9a90-c49503405aea\") " Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.009975 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config" (OuterVolumeSpecName: "config") pod "b34ad133-6b32-4c98-9a90-c49503405aea" (UID: "b34ad133-6b32-4c98-9a90-c49503405aea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.010218 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34ad133-6b32-4c98-9a90-c49503405aea-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.014779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw" (OuterVolumeSpecName: "kube-api-access-brtgw") pod "b34ad133-6b32-4c98-9a90-c49503405aea" (UID: "b34ad133-6b32-4c98-9a90-c49503405aea"). InnerVolumeSpecName "kube-api-access-brtgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.091860 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" event={"ID":"b34ad133-6b32-4c98-9a90-c49503405aea","Type":"ContainerDied","Data":"7b19d95f1b57bb7ae3531206ca590964196b40a63e3fe8fac207cbc587465b69"} Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.091927 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f565b7f65-c65zg" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.111816 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brtgw\" (UniqueName: \"kubernetes.io/projected/b34ad133-6b32-4c98-9a90-c49503405aea-kube-api-access-brtgw\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.142859 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:43 crc kubenswrapper[4769]: I1006 07:30:43.147558 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f565b7f65-c65zg"] Oct 06 07:30:44 crc kubenswrapper[4769]: I1006 07:30:44.176721 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34ad133-6b32-4c98-9a90-c49503405aea" path="/var/lib/kubelet/pods/b34ad133-6b32-4c98-9a90-c49503405aea/volumes" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.417682 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.552819 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc\") pod \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.552891 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f57x\" (UniqueName: \"kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x\") pod \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.553043 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config\") pod \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\" (UID: \"d7f8f2c8-d323-4671-b7fb-7419051e04c7\") " Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.553374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7f8f2c8-d323-4671-b7fb-7419051e04c7" (UID: "d7f8f2c8-d323-4671-b7fb-7419051e04c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.553779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config" (OuterVolumeSpecName: "config") pod "d7f8f2c8-d323-4671-b7fb-7419051e04c7" (UID: "d7f8f2c8-d323-4671-b7fb-7419051e04c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.557556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x" (OuterVolumeSpecName: "kube-api-access-5f57x") pod "d7f8f2c8-d323-4671-b7fb-7419051e04c7" (UID: "d7f8f2c8-d323-4671-b7fb-7419051e04c7"). InnerVolumeSpecName "kube-api-access-5f57x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.654681 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.654717 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7f8f2c8-d323-4671-b7fb-7419051e04c7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:45 crc kubenswrapper[4769]: I1006 07:30:45.654726 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f57x\" (UniqueName: \"kubernetes.io/projected/d7f8f2c8-d323-4671-b7fb-7419051e04c7-kube-api-access-5f57x\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.128440 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerStarted","Data":"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e"} Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.130512 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" event={"ID":"d7f8f2c8-d323-4671-b7fb-7419051e04c7","Type":"ContainerDied","Data":"43598bdeb18d1d3605797c00931cf3cf9c4fb12a191627900a82814e4029de05"} Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 
07:30:46.130567 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4b488c7f-8ckl7" Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.154236 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bkkb2" podStartSLOduration=11.203986377 podStartE2EDuration="30.154216622s" podCreationTimestamp="2025-10-06 07:30:16 +0000 UTC" firstStartedPulling="2025-10-06 07:30:18.857940366 +0000 UTC m=+815.382221513" lastFinishedPulling="2025-10-06 07:30:37.808170611 +0000 UTC m=+834.332451758" observedRunningTime="2025-10-06 07:30:46.146823039 +0000 UTC m=+842.671104196" watchObservedRunningTime="2025-10-06 07:30:46.154216622 +0000 UTC m=+842.678497769" Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.200593 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.213728 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c4b488c7f-8ckl7"] Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.884284 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:46 crc kubenswrapper[4769]: I1006 07:30:46.884832 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.140254 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71170300-e629-4a76-8960-29bdc328edf5","Type":"ContainerStarted","Data":"c44701a1cdbdda499ba34c6f828c6dbcfe6ffae6312e2e391204849f4e65a072"} Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.141876 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vh747" 
event={"ID":"780fca18-2e85-4252-9b00-326e46d26eae","Type":"ContainerStarted","Data":"f9fe65ebf48acb1d93532f12432ce94de9d83492878e086b4e02c29dc92e7b5e"} Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.145368 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerStarted","Data":"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e"} Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.149698 4769 generic.go:334] "Generic (PLEG): container finished" podID="e2389387-bd73-49fb-b119-41f04a08f615" containerID="c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3" exitCode=0 Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.149771 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerDied","Data":"c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3"} Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.152906 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c804d714-a4d7-487d-b52a-bb2e1a47f36b","Type":"ContainerStarted","Data":"ad5f0f8c9efff13f20973dc2c9ca8345827b1774e55da172539fe8de2ca61617"} Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.152965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.205474 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vh747" podStartSLOduration=11.169489571 podStartE2EDuration="30.20545638s" podCreationTimestamp="2025-10-06 07:30:17 +0000 UTC" firstStartedPulling="2025-10-06 07:30:18.769835357 +0000 UTC m=+815.294116504" lastFinishedPulling="2025-10-06 07:30:37.805802126 +0000 UTC m=+834.330083313" observedRunningTime="2025-10-06 
07:30:47.203724733 +0000 UTC m=+843.728005920" watchObservedRunningTime="2025-10-06 07:30:47.20545638 +0000 UTC m=+843.729737527" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.225945 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.971101403 podStartE2EDuration="38.22592662s" podCreationTimestamp="2025-10-06 07:30:09 +0000 UTC" firstStartedPulling="2025-10-06 07:30:10.586893284 +0000 UTC m=+807.111174431" lastFinishedPulling="2025-10-06 07:30:45.841718491 +0000 UTC m=+842.365999648" observedRunningTime="2025-10-06 07:30:47.223860884 +0000 UTC m=+843.748142031" watchObservedRunningTime="2025-10-06 07:30:47.22592662 +0000 UTC m=+843.750207767" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.716520 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.743391 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.744726 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.755504 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.755740 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.901144 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.901290 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfx6t\" (UniqueName: \"kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.901351 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.901383 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " 
pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:47 crc kubenswrapper[4769]: I1006 07:30:47.901432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.002287 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.002326 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.002349 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.002396 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc 
kubenswrapper[4769]: I1006 07:30:48.002482 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfx6t\" (UniqueName: \"kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.003195 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.003262 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.003341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.004122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.023068 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xfx6t\" (UniqueName: \"kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t\") pod \"dnsmasq-dns-78c4cb665f-mg486\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.164381 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerStarted","Data":"3357c991a8b7085ad0995bf2bb18c4579abdee907c0b8d5108a27ad92d925d5a"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.165955 4769 generic.go:334] "Generic (PLEG): container finished" podID="500381ee-62ba-4309-b58e-1e398c1f8ea2" containerID="9aa870d175e0fdab26f06bc23897eab638a8762272e5bbf0260f1f2f4d30ab22" exitCode=0 Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.167374 4769 generic.go:334] "Generic (PLEG): container finished" podID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerID="cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e" exitCode=0 Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.169354 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerID="52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a" exitCode=0 Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.181589 4769 generic.go:334] "Generic (PLEG): container finished" podID="530bb6cc-e4fa-42bf-88aa-38020d3b5513" containerID="a7fef9751e11b71658aaf1e9790e0c0e4b505db49dc9dc6d3487881663b46878" exitCode=0 Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.184408 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f8f2c8-d323-4671-b7fb-7419051e04c7" path="/var/lib/kubelet/pods/d7f8f2c8-d323-4671-b7fb-7419051e04c7/volumes" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185406 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" event={"ID":"500381ee-62ba-4309-b58e-1e398c1f8ea2","Type":"ContainerDied","Data":"9aa870d175e0fdab26f06bc23897eab638a8762272e5bbf0260f1f2f4d30ab22"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185516 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerDied","Data":"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185579 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb69d786f-742h8" event={"ID":"ff15fac5-dae0-423d-803b-bd3caad160ae","Type":"ContainerDied","Data":"52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185660 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1687c91c-da3f-4399-a0a8-b9d1769ddd30","Type":"ContainerStarted","Data":"3d79fee57f465b3ac6a38aade48c6a3436870d0901a44e190be39b87a613cc75"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185728 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"1687c91c-da3f-4399-a0a8-b9d1769ddd30","Type":"ContainerStarted","Data":"442155cbb1acac09e35d6a10cc8614df3ce2930f2b796fa25b02de4af5d05619"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.185792 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4qmzc" event={"ID":"530bb6cc-e4fa-42bf-88aa-38020d3b5513","Type":"ContainerDied","Data":"a7fef9751e11b71658aaf1e9790e0c0e4b505db49dc9dc6d3487881663b46878"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.209887 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69" 
event={"ID":"7666a29e-0c83-4099-ae6d-1fc333d3c630","Type":"ContainerStarted","Data":"7ac72dbbd494c3abd3bcb70ee5833af74766ef3193477218aaf1b5f5b6b69081"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.210491 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rdj69" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.219189 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerStarted","Data":"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.232443 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"687dbbb5-7929-4674-9ac4-77ec7ff8e424","Type":"ContainerStarted","Data":"b58d53d8162b24b17b904533df7295d3686167f888afca6915c2e31e03199dd4"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.253207 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"91209a46-4315-4dd2-91a5-7658b454d9ec","Type":"ContainerStarted","Data":"38ecd09a0d97ad2386aa8996d22a66ea31766aa10c18063e0edd1d050d22b40c"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.253488 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"91209a46-4315-4dd2-91a5-7658b454d9ec","Type":"ContainerStarted","Data":"81139b8dba62c9093d922275b695cc84780b9120c473c4cc199dcc2d571cebe5"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.253904 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.258782 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerStarted","Data":"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7"} Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.363086 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bkkb2" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="registry-server" probeResult="failure" output=< Oct 06 07:30:48 crc kubenswrapper[4769]: timeout: failed to connect service ":50051" within 1s Oct 06 07:30:48 crc kubenswrapper[4769]: > Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.451567 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.084612023 podStartE2EDuration="30.451545355s" podCreationTimestamp="2025-10-06 07:30:18 +0000 UTC" firstStartedPulling="2025-10-06 07:30:19.938658615 +0000 UTC m=+816.462939752" lastFinishedPulling="2025-10-06 07:30:47.305591927 +0000 UTC m=+843.829873084" observedRunningTime="2025-10-06 07:30:48.442862108 +0000 UTC m=+844.967143255" watchObservedRunningTime="2025-10-06 07:30:48.451545355 +0000 UTC m=+844.975826502" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.525165 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xdb5d" podStartSLOduration=3.719497148 podStartE2EDuration="33.525145976s" podCreationTimestamp="2025-10-06 07:30:15 +0000 UTC" firstStartedPulling="2025-10-06 07:30:17.841818462 +0000 UTC m=+814.366099609" lastFinishedPulling="2025-10-06 07:30:47.64746729 +0000 UTC m=+844.171748437" observedRunningTime="2025-10-06 07:30:48.524685634 +0000 UTC m=+845.048966781" watchObservedRunningTime="2025-10-06 
07:30:48.525145976 +0000 UTC m=+845.049427123" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.530672 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rdj69" podStartSLOduration=4.028393712 podStartE2EDuration="33.530655017s" podCreationTimestamp="2025-10-06 07:30:15 +0000 UTC" firstStartedPulling="2025-10-06 07:30:16.443106718 +0000 UTC m=+812.967387865" lastFinishedPulling="2025-10-06 07:30:45.945368023 +0000 UTC m=+842.469649170" observedRunningTime="2025-10-06 07:30:48.502087646 +0000 UTC m=+845.026368793" watchObservedRunningTime="2025-10-06 07:30:48.530655017 +0000 UTC m=+845.054936164" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.605635 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.745253464 podStartE2EDuration="33.605600005s" podCreationTimestamp="2025-10-06 07:30:15 +0000 UTC" firstStartedPulling="2025-10-06 07:30:18.142296948 +0000 UTC m=+814.666578095" lastFinishedPulling="2025-10-06 07:30:46.002643489 +0000 UTC m=+842.526924636" observedRunningTime="2025-10-06 07:30:48.601668567 +0000 UTC m=+845.125949714" watchObservedRunningTime="2025-10-06 07:30:48.605600005 +0000 UTC m=+845.129881152" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.727906 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.851397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config\") pod \"500381ee-62ba-4309-b58e-1e398c1f8ea2\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.852124 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wxmx\" (UniqueName: \"kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx\") pod \"500381ee-62ba-4309-b58e-1e398c1f8ea2\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.852326 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc\") pod \"500381ee-62ba-4309-b58e-1e398c1f8ea2\" (UID: \"500381ee-62ba-4309-b58e-1e398c1f8ea2\") " Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.859089 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx" (OuterVolumeSpecName: "kube-api-access-4wxmx") pod "500381ee-62ba-4309-b58e-1e398c1f8ea2" (UID: "500381ee-62ba-4309-b58e-1e398c1f8ea2"). InnerVolumeSpecName "kube-api-access-4wxmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.871087 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config" (OuterVolumeSpecName: "config") pod "500381ee-62ba-4309-b58e-1e398c1f8ea2" (UID: "500381ee-62ba-4309-b58e-1e398c1f8ea2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.872318 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "500381ee-62ba-4309-b58e-1e398c1f8ea2" (UID: "500381ee-62ba-4309-b58e-1e398c1f8ea2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.954551 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.954585 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wxmx\" (UniqueName: \"kubernetes.io/projected/500381ee-62ba-4309-b58e-1e398c1f8ea2-kube-api-access-4wxmx\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.954596 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/500381ee-62ba-4309-b58e-1e398c1f8ea2-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:48 crc kubenswrapper[4769]: I1006 07:30:48.991971 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:30:49 crc kubenswrapper[4769]: W1006 07:30:49.001597 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61ee5649_5628_4caf_af79_6b6f6b944c79.slice/crio-ecc58fb541af88abd939c6eda51c4d5fd9fe337ce5aca1a662d26b88125b9290 WatchSource:0}: Error finding container ecc58fb541af88abd939c6eda51c4d5fd9fe337ce5aca1a662d26b88125b9290: Status 404 returned error can't find the container with id ecc58fb541af88abd939c6eda51c4d5fd9fe337ce5aca1a662d26b88125b9290 Oct 06 07:30:49 crc 
kubenswrapper[4769]: I1006 07:30:49.269463 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb69d786f-742h8" event={"ID":"ff15fac5-dae0-423d-803b-bd3caad160ae","Type":"ContainerStarted","Data":"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.269757 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.272104 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4qmzc" event={"ID":"530bb6cc-e4fa-42bf-88aa-38020d3b5513","Type":"ContainerStarted","Data":"461d126cc512301e99d2148238edbf04e2f5755f321644ef73a78c9f509f703c"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.272149 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-4qmzc" event={"ID":"530bb6cc-e4fa-42bf-88aa-38020d3b5513","Type":"ContainerStarted","Data":"6c12293593105ad26315ce719040f7656f348fadcca454f7f66b29405feda5b1"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.272223 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.272254 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.273390 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" event={"ID":"500381ee-62ba-4309-b58e-1e398c1f8ea2","Type":"ContainerDied","Data":"1f5958623c6b075675e99c3ea23554cdc15fbf599b7b6800086d206d5527a96e"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.273400 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c8c6fd99-24glj" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.273461 4769 scope.go:117] "RemoveContainer" containerID="9aa870d175e0fdab26f06bc23897eab638a8762272e5bbf0260f1f2f4d30ab22" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.274621 4769 generic.go:334] "Generic (PLEG): container finished" podID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerID="a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108" exitCode=0 Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.274694 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" event={"ID":"61ee5649-5628-4caf-af79-6b6f6b944c79","Type":"ContainerDied","Data":"a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.274733 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" event={"ID":"61ee5649-5628-4caf-af79-6b6f6b944c79","Type":"ContainerStarted","Data":"ecc58fb541af88abd939c6eda51c4d5fd9fe337ce5aca1a662d26b88125b9290"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.278414 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="dnsmasq-dns" containerID="cri-o://71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d" gracePeriod=10 Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.278670 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerStarted","Data":"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d"} Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.280139 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:49 
crc kubenswrapper[4769]: I1006 07:30:49.298264 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb69d786f-742h8" podStartSLOduration=4.433375432 podStartE2EDuration="31.298246614s" podCreationTimestamp="2025-10-06 07:30:18 +0000 UTC" firstStartedPulling="2025-10-06 07:30:19.08082227 +0000 UTC m=+815.605103417" lastFinishedPulling="2025-10-06 07:30:45.945693452 +0000 UTC m=+842.469974599" observedRunningTime="2025-10-06 07:30:49.289435973 +0000 UTC m=+845.813717140" watchObservedRunningTime="2025-10-06 07:30:49.298246614 +0000 UTC m=+845.822527761" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.310327 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" podStartSLOduration=4.680739501 podStartE2EDuration="44.310307133s" podCreationTimestamp="2025-10-06 07:30:05 +0000 UTC" firstStartedPulling="2025-10-06 07:30:06.211751698 +0000 UTC m=+802.736032845" lastFinishedPulling="2025-10-06 07:30:45.84131934 +0000 UTC m=+842.365600477" observedRunningTime="2025-10-06 07:30:49.310023666 +0000 UTC m=+845.834304813" watchObservedRunningTime="2025-10-06 07:30:49.310307133 +0000 UTC m=+845.834588290" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.359610 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-4qmzc" podStartSLOduration=5.27082989 podStartE2EDuration="34.359592331s" podCreationTimestamp="2025-10-06 07:30:15 +0000 UTC" firstStartedPulling="2025-10-06 07:30:16.752748574 +0000 UTC m=+813.277029721" lastFinishedPulling="2025-10-06 07:30:45.841510995 +0000 UTC m=+842.365792162" observedRunningTime="2025-10-06 07:30:49.354331067 +0000 UTC m=+845.878612214" watchObservedRunningTime="2025-10-06 07:30:49.359592331 +0000 UTC m=+845.883873478" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.387849 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.388260 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.400112 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.415975 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c8c6fd99-24glj"] Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.714443 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.767978 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq25v\" (UniqueName: \"kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v\") pod \"41106e8f-850b-4dcc-a54a-8baf17de89f7\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.768170 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc\") pod \"41106e8f-850b-4dcc-a54a-8baf17de89f7\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.768240 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config\") pod \"41106e8f-850b-4dcc-a54a-8baf17de89f7\" (UID: \"41106e8f-850b-4dcc-a54a-8baf17de89f7\") " Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.772548 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v" 
(OuterVolumeSpecName: "kube-api-access-nq25v") pod "41106e8f-850b-4dcc-a54a-8baf17de89f7" (UID: "41106e8f-850b-4dcc-a54a-8baf17de89f7"). InnerVolumeSpecName "kube-api-access-nq25v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.806494 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config" (OuterVolumeSpecName: "config") pod "41106e8f-850b-4dcc-a54a-8baf17de89f7" (UID: "41106e8f-850b-4dcc-a54a-8baf17de89f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.815224 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41106e8f-850b-4dcc-a54a-8baf17de89f7" (UID: "41106e8f-850b-4dcc-a54a-8baf17de89f7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.870268 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.870302 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41106e8f-850b-4dcc-a54a-8baf17de89f7-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:49 crc kubenswrapper[4769]: I1006 07:30:49.870314 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq25v\" (UniqueName: \"kubernetes.io/projected/41106e8f-850b-4dcc-a54a-8baf17de89f7-kube-api-access-nq25v\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.181475 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500381ee-62ba-4309-b58e-1e398c1f8ea2" path="/var/lib/kubelet/pods/500381ee-62ba-4309-b58e-1e398c1f8ea2/volumes" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.286694 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" event={"ID":"61ee5649-5628-4caf-af79-6b6f6b944c79","Type":"ContainerStarted","Data":"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce"} Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.287296 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.289006 4769 generic.go:334] "Generic (PLEG): container finished" podID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerID="71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d" exitCode=0 Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.289820 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.290239 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerDied","Data":"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d"} Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.290265 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb77f6b6c-kz76s" event={"ID":"41106e8f-850b-4dcc-a54a-8baf17de89f7","Type":"ContainerDied","Data":"46402d7694437602fd2036470a5fb8f4420407cf0e096ae6367db089f5ea2268"} Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.290281 4769 scope.go:117] "RemoveContainer" containerID="71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.305842 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" podStartSLOduration=3.30582645 podStartE2EDuration="3.30582645s" podCreationTimestamp="2025-10-06 07:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:30:50.303648141 +0000 UTC m=+846.827929288" watchObservedRunningTime="2025-10-06 07:30:50.30582645 +0000 UTC m=+846.830107597" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.311197 4769 scope.go:117] "RemoveContainer" containerID="cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.323721 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.328891 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb77f6b6c-kz76s"] Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 
07:30:50.329297 4769 scope.go:117] "RemoveContainer" containerID="71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d" Oct 06 07:30:50 crc kubenswrapper[4769]: E1006 07:30:50.329847 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d\": container with ID starting with 71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d not found: ID does not exist" containerID="71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.329883 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d"} err="failed to get container status \"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d\": rpc error: code = NotFound desc = could not find container \"71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d\": container with ID starting with 71da51058bf15fe27e87dabd8a34347a37ef80ee052ccecddc807f4f3189754d not found: ID does not exist" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.329914 4769 scope.go:117] "RemoveContainer" containerID="cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e" Oct 06 07:30:50 crc kubenswrapper[4769]: E1006 07:30:50.330289 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e\": container with ID starting with cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e not found: ID does not exist" containerID="cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.330323 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e"} err="failed to get container status \"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e\": rpc error: code = NotFound desc = could not find container \"cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e\": container with ID starting with cfa2c927b2f9d0c02a8e74164a7577e13554b216e3189e00b2755f089af7510e not found: ID does not exist" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.363733 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:50 crc kubenswrapper[4769]: I1006 07:30:50.400985 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:51 crc kubenswrapper[4769]: I1006 07:30:51.312248 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.174993 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" path="/var/lib/kubelet/pods/41106e8f-850b-4dcc-a54a-8baf17de89f7/volumes" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.320989 4769 generic.go:334] "Generic (PLEG): container finished" podID="71170300-e629-4a76-8960-29bdc328edf5" containerID="c44701a1cdbdda499ba34c6f828c6dbcfe6ffae6312e2e391204849f4e65a072" exitCode=0 Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.321067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71170300-e629-4a76-8960-29bdc328edf5","Type":"ContainerDied","Data":"c44701a1cdbdda499ba34c6f828c6dbcfe6ffae6312e2e391204849f4e65a072"} Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.372011 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 06 07:30:52 crc kubenswrapper[4769]: 
I1006 07:30:52.523863 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.574965 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797361 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 06 07:30:52 crc kubenswrapper[4769]: E1006 07:30:52.797650 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="init" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797667 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="init" Oct 06 07:30:52 crc kubenswrapper[4769]: E1006 07:30:52.797677 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="dnsmasq-dns" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797685 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="dnsmasq-dns" Oct 06 07:30:52 crc kubenswrapper[4769]: E1006 07:30:52.797699 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500381ee-62ba-4309-b58e-1e398c1f8ea2" containerName="init" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797706 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="500381ee-62ba-4309-b58e-1e398c1f8ea2" containerName="init" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797858 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="41106e8f-850b-4dcc-a54a-8baf17de89f7" containerName="dnsmasq-dns" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.797871 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="500381ee-62ba-4309-b58e-1e398c1f8ea2" containerName="init" Oct 06 07:30:52 crc 
kubenswrapper[4769]: I1006 07:30:52.798708 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.801400 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.801544 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9vk59" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.801968 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.810954 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.811222 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916438 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmct\" (UniqueName: \"kubernetes.io/projected/1e249936-c0cd-44da-9ad9-c11c17090006-kube-api-access-6nmct\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-scripts\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916555 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916726 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916773 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.916862 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-config\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:52 crc kubenswrapper[4769]: I1006 07:30:52.917021 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018319 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018391 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nmct\" (UniqueName: \"kubernetes.io/projected/1e249936-c0cd-44da-9ad9-c11c17090006-kube-api-access-6nmct\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018442 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-scripts\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018492 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018557 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018573 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018594 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-config\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.018888 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.019397 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-scripts\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.019524 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e249936-c0cd-44da-9ad9-c11c17090006-config\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.024079 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.024148 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc 
kubenswrapper[4769]: I1006 07:30:53.024774 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e249936-c0cd-44da-9ad9-c11c17090006-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.044933 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nmct\" (UniqueName: \"kubernetes.io/projected/1e249936-c0cd-44da-9ad9-c11c17090006-kube-api-access-6nmct\") pod \"ovn-northd-0\" (UID: \"1e249936-c0cd-44da-9ad9-c11c17090006\") " pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.114907 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.330634 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"71170300-e629-4a76-8960-29bdc328edf5","Type":"ContainerStarted","Data":"27dbe40ece92a1d83392c045ae5e54442165ea4e71f42d284a310fa090353065"} Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.331914 4769 generic.go:334] "Generic (PLEG): container finished" podID="687dbbb5-7929-4674-9ac4-77ec7ff8e424" containerID="b58d53d8162b24b17b904533df7295d3686167f888afca6915c2e31e03199dd4" exitCode=0 Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.332134 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"687dbbb5-7929-4674-9ac4-77ec7ff8e424","Type":"ContainerDied","Data":"b58d53d8162b24b17b904533df7295d3686167f888afca6915c2e31e03199dd4"} Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.352369 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.847152157 podStartE2EDuration="45.352351169s" 
podCreationTimestamp="2025-10-06 07:30:08 +0000 UTC" firstStartedPulling="2025-10-06 07:30:10.367185277 +0000 UTC m=+806.891466424" lastFinishedPulling="2025-10-06 07:30:45.872384289 +0000 UTC m=+842.396665436" observedRunningTime="2025-10-06 07:30:53.352244286 +0000 UTC m=+849.876525433" watchObservedRunningTime="2025-10-06 07:30:53.352351169 +0000 UTC m=+849.876632316" Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.500175 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 06 07:30:53 crc kubenswrapper[4769]: W1006 07:30:53.512134 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e249936_c0cd_44da_9ad9_c11c17090006.slice/crio-cff2e020c71c1531831ba7afe7c16e12384b24f8b0b9a06b5e227437ba3cf8b1 WatchSource:0}: Error finding container cff2e020c71c1531831ba7afe7c16e12384b24f8b0b9a06b5e227437ba3cf8b1: Status 404 returned error can't find the container with id cff2e020c71c1531831ba7afe7c16e12384b24f8b0b9a06b5e227437ba3cf8b1 Oct 06 07:30:53 crc kubenswrapper[4769]: I1006 07:30:53.594840 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:54 crc kubenswrapper[4769]: I1006 07:30:54.352199 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"687dbbb5-7929-4674-9ac4-77ec7ff8e424","Type":"ContainerStarted","Data":"48337d735055d98642ff4f593f5158f2c8a6951c9b6ca4a41388b4e3140f101a"} Oct 06 07:30:54 crc kubenswrapper[4769]: I1006 07:30:54.353715 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e249936-c0cd-44da-9ad9-c11c17090006","Type":"ContainerStarted","Data":"f3921b608dc034ebb1947693691c551dbd3f4a1b7f97dbeb36f2ab478e9922e5"} Oct 06 07:30:54 crc kubenswrapper[4769]: I1006 07:30:54.353740 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"1e249936-c0cd-44da-9ad9-c11c17090006","Type":"ContainerStarted","Data":"cff2e020c71c1531831ba7afe7c16e12384b24f8b0b9a06b5e227437ba3cf8b1"} Oct 06 07:30:54 crc kubenswrapper[4769]: I1006 07:30:54.381315 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.589856813 podStartE2EDuration="46.38129358s" podCreationTimestamp="2025-10-06 07:30:08 +0000 UTC" firstStartedPulling="2025-10-06 07:30:10.151475759 +0000 UTC m=+806.675756896" lastFinishedPulling="2025-10-06 07:30:45.942912516 +0000 UTC m=+842.467193663" observedRunningTime="2025-10-06 07:30:54.373577058 +0000 UTC m=+850.897858225" watchObservedRunningTime="2025-10-06 07:30:54.38129358 +0000 UTC m=+850.905574717" Oct 06 07:30:55 crc kubenswrapper[4769]: I1006 07:30:55.109555 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 06 07:30:55 crc kubenswrapper[4769]: I1006 07:30:55.366549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e249936-c0cd-44da-9ad9-c11c17090006","Type":"ContainerStarted","Data":"76e6308a59b623e35f22628ea04911d34f5ec8404730f1e8f1262c18134a5b1b"} Oct 06 07:30:55 crc kubenswrapper[4769]: I1006 07:30:55.367746 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 06 07:30:55 crc kubenswrapper[4769]: I1006 07:30:55.391918 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.7450860820000003 podStartE2EDuration="3.391899748s" podCreationTimestamp="2025-10-06 07:30:52 +0000 UTC" firstStartedPulling="2025-10-06 07:30:53.514319126 +0000 UTC m=+850.038600273" lastFinishedPulling="2025-10-06 07:30:54.161132792 +0000 UTC m=+850.685413939" observedRunningTime="2025-10-06 07:30:55.386344876 +0000 UTC m=+851.910626043" watchObservedRunningTime="2025-10-06 07:30:55.391899748 +0000 UTC 
m=+851.916180895" Oct 06 07:30:55 crc kubenswrapper[4769]: E1006 07:30:55.393626 4769 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.5:51502->38.102.83.5:43105: read tcp 38.102.83.5:51502->38.102.83.5:43105: read: connection reset by peer Oct 06 07:30:56 crc kubenswrapper[4769]: I1006 07:30:56.338352 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:56 crc kubenswrapper[4769]: I1006 07:30:56.338642 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:56 crc kubenswrapper[4769]: I1006 07:30:56.410399 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:56 crc kubenswrapper[4769]: I1006 07:30:56.928804 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:56 crc kubenswrapper[4769]: I1006 07:30:56.977353 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:57 crc kubenswrapper[4769]: I1006 07:30:57.441538 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:30:57 crc kubenswrapper[4769]: I1006 07:30:57.647395 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.256602 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.318816 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 
07:30:58.319100 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb69d786f-742h8" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="dnsmasq-dns" containerID="cri-o://ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30" gracePeriod=10 Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.387286 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bkkb2" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="registry-server" containerID="cri-o://b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e" gracePeriod=2 Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.815918 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.828649 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907280 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f4wv\" (UniqueName: \"kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv\") pod \"b70a68a0-0a99-4cad-acf0-fec33240636a\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907436 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config\") pod \"ff15fac5-dae0-423d-803b-bd3caad160ae\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907462 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content\") pod \"b70a68a0-0a99-4cad-acf0-fec33240636a\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907483 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc\") pod \"ff15fac5-dae0-423d-803b-bd3caad160ae\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907504 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb\") pod \"ff15fac5-dae0-423d-803b-bd3caad160ae\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907584 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities\") pod \"b70a68a0-0a99-4cad-acf0-fec33240636a\" (UID: \"b70a68a0-0a99-4cad-acf0-fec33240636a\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.907606 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wkbl\" (UniqueName: \"kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl\") pod \"ff15fac5-dae0-423d-803b-bd3caad160ae\" (UID: \"ff15fac5-dae0-423d-803b-bd3caad160ae\") " Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.908338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities" (OuterVolumeSpecName: "utilities") pod "b70a68a0-0a99-4cad-acf0-fec33240636a" (UID: "b70a68a0-0a99-4cad-acf0-fec33240636a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.908662 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.913108 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv" (OuterVolumeSpecName: "kube-api-access-7f4wv") pod "b70a68a0-0a99-4cad-acf0-fec33240636a" (UID: "b70a68a0-0a99-4cad-acf0-fec33240636a"). InnerVolumeSpecName "kube-api-access-7f4wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.913462 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl" (OuterVolumeSpecName: "kube-api-access-8wkbl") pod "ff15fac5-dae0-423d-803b-bd3caad160ae" (UID: "ff15fac5-dae0-423d-803b-bd3caad160ae"). InnerVolumeSpecName "kube-api-access-8wkbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.947870 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff15fac5-dae0-423d-803b-bd3caad160ae" (UID: "ff15fac5-dae0-423d-803b-bd3caad160ae"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.947958 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff15fac5-dae0-423d-803b-bd3caad160ae" (UID: "ff15fac5-dae0-423d-803b-bd3caad160ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.952748 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config" (OuterVolumeSpecName: "config") pod "ff15fac5-dae0-423d-803b-bd3caad160ae" (UID: "ff15fac5-dae0-423d-803b-bd3caad160ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:30:58 crc kubenswrapper[4769]: I1006 07:30:58.967011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b70a68a0-0a99-4cad-acf0-fec33240636a" (UID: "b70a68a0-0a99-4cad-acf0-fec33240636a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010544 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wkbl\" (UniqueName: \"kubernetes.io/projected/ff15fac5-dae0-423d-803b-bd3caad160ae-kube-api-access-8wkbl\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010595 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f4wv\" (UniqueName: \"kubernetes.io/projected/b70a68a0-0a99-4cad-acf0-fec33240636a-kube-api-access-7f4wv\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010608 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010620 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b70a68a0-0a99-4cad-acf0-fec33240636a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010631 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.010642 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff15fac5-dae0-423d-803b-bd3caad160ae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.401389 4769 generic.go:334] "Generic (PLEG): container finished" podID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerID="b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e" exitCode=0 Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.401765 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerDied","Data":"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e"} Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.401795 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bkkb2" event={"ID":"b70a68a0-0a99-4cad-acf0-fec33240636a","Type":"ContainerDied","Data":"695a575bb43533c254a6df023b32fd3f400bef6d0b5a51259842ae9c1115aa1d"} Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.401813 4769 scope.go:117] "RemoveContainer" containerID="b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.401949 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bkkb2" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.405407 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerID="ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30" exitCode=0 Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.405568 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb69d786f-742h8" event={"ID":"ff15fac5-dae0-423d-803b-bd3caad160ae","Type":"ContainerDied","Data":"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30"} Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.405594 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb69d786f-742h8" event={"ID":"ff15fac5-dae0-423d-803b-bd3caad160ae","Type":"ContainerDied","Data":"c17dca402377503c8c3aabaeae8566d0ebb22b1dd44639fe2460aa859dcc8b75"} Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.405646 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb69d786f-742h8" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.436620 4769 scope.go:117] "RemoveContainer" containerID="7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.438404 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.463590 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bkkb2"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.473160 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.482212 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb69d786f-742h8"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.502707 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.516521 4769 scope.go:117] "RemoveContainer" containerID="320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.531352 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.531409 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.553647 4769 scope.go:117] "RemoveContainer" containerID="b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e" Oct 06 07:30:59 crc kubenswrapper[4769]: E1006 07:30:59.557706 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e\": container with ID starting with b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e not found: ID does not exist" containerID="b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.557766 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e"} err="failed to get container status \"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e\": rpc error: code = NotFound desc = could not find container \"b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e\": container with ID starting with b904dcfdc818324d4ce59fa81a3262f0ceefdfb2d392ce738194b24ae0360b6e not found: ID does not exist" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.557801 4769 scope.go:117] "RemoveContainer" containerID="7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00" Oct 06 07:30:59 crc kubenswrapper[4769]: E1006 07:30:59.558297 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00\": container with ID starting with 7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00 not found: ID does not exist" containerID="7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.558370 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00"} err="failed to get container status \"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00\": rpc error: code = NotFound desc = could not find container \"7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00\": container with ID 
starting with 7e7e33a9eea803bdd876b5bb3db9e983998706df42e73b0de6f0c313195b8f00 not found: ID does not exist" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.558400 4769 scope.go:117] "RemoveContainer" containerID="320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae" Oct 06 07:30:59 crc kubenswrapper[4769]: E1006 07:30:59.559713 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae\": container with ID starting with 320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae not found: ID does not exist" containerID="320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.559750 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae"} err="failed to get container status \"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae\": rpc error: code = NotFound desc = could not find container \"320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae\": container with ID starting with 320b90eb0cf5d2173b12e726cf162e4aebbe21752277ed694801000b6a4018ae not found: ID does not exist" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.559764 4769 scope.go:117] "RemoveContainer" containerID="ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.585210 4769 scope.go:117] "RemoveContainer" containerID="52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.607704 4769 scope.go:117] "RemoveContainer" containerID="ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30" Oct 06 07:30:59 crc kubenswrapper[4769]: E1006 07:30:59.611261 4769 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30\": container with ID starting with ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30 not found: ID does not exist" containerID="ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.611302 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30"} err="failed to get container status \"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30\": rpc error: code = NotFound desc = could not find container \"ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30\": container with ID starting with ecb22222bc0fe0bb1d55525f9451c47489be7c030e99eb82b2cf78b48425fc30 not found: ID does not exist" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.611329 4769 scope.go:117] "RemoveContainer" containerID="52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a" Oct 06 07:30:59 crc kubenswrapper[4769]: E1006 07:30:59.611572 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a\": container with ID starting with 52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a not found: ID does not exist" containerID="52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.611624 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a"} err="failed to get container status \"52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a\": rpc error: code = NotFound desc = could not find container 
\"52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a\": container with ID starting with 52ccb4bb9977f06e998827beed4e1e6379701b7ddb59c101e4bc3e66ed5f4b9a not found: ID does not exist" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.618230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.697943 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.697976 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.771223 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.841712 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:30:59 crc kubenswrapper[4769]: I1006 07:30:59.841952 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttqgb" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="registry-server" containerID="cri-o://ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023" gracePeriod=2 Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.181317 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" path="/var/lib/kubelet/pods/b70a68a0-0a99-4cad-acf0-fec33240636a/volumes" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.182414 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" path="/var/lib/kubelet/pods/ff15fac5-dae0-423d-803b-bd3caad160ae/volumes" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 
07:31:00.281776 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.333157 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content\") pod \"931992f0-da1a-4ed5-80fa-268fe083c2d8\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.333708 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities\") pod \"931992f0-da1a-4ed5-80fa-268fe083c2d8\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.333830 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5d5\" (UniqueName: \"kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5\") pod \"931992f0-da1a-4ed5-80fa-268fe083c2d8\" (UID: \"931992f0-da1a-4ed5-80fa-268fe083c2d8\") " Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.334410 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities" (OuterVolumeSpecName: "utilities") pod "931992f0-da1a-4ed5-80fa-268fe083c2d8" (UID: "931992f0-da1a-4ed5-80fa-268fe083c2d8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.335071 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.342559 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5" (OuterVolumeSpecName: "kube-api-access-pn5d5") pod "931992f0-da1a-4ed5-80fa-268fe083c2d8" (UID: "931992f0-da1a-4ed5-80fa-268fe083c2d8"). InnerVolumeSpecName "kube-api-access-pn5d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.389915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "931992f0-da1a-4ed5-80fa-268fe083c2d8" (UID: "931992f0-da1a-4ed5-80fa-268fe083c2d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.413176 4769 generic.go:334] "Generic (PLEG): container finished" podID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerID="ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023" exitCode=0 Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.413226 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttqgb" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.413258 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerDied","Data":"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023"} Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.413291 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttqgb" event={"ID":"931992f0-da1a-4ed5-80fa-268fe083c2d8","Type":"ContainerDied","Data":"504d11937a2857bbc8e154b137ee85348075e45397687f6065553f5abc12e62a"} Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.413312 4769 scope.go:117] "RemoveContainer" containerID="ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.437290 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn5d5\" (UniqueName: \"kubernetes.io/projected/931992f0-da1a-4ed5-80fa-268fe083c2d8-kube-api-access-pn5d5\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.437318 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/931992f0-da1a-4ed5-80fa-268fe083c2d8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.437362 4769 scope.go:117] "RemoveContainer" containerID="7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.444635 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.451250 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttqgb"] Oct 06 07:31:00 crc 
kubenswrapper[4769]: I1006 07:31:00.473902 4769 scope.go:117] "RemoveContainer" containerID="38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.495543 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.518830 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.527184 4769 scope.go:117] "RemoveContainer" containerID="ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023" Oct 06 07:31:00 crc kubenswrapper[4769]: E1006 07:31:00.527627 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023\": container with ID starting with ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023 not found: ID does not exist" containerID="ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.527659 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023"} err="failed to get container status \"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023\": rpc error: code = NotFound desc = could not find container \"ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023\": container with ID starting with ad84224393109c3a508cdf8d301ec5a7307ee9cf133d2e4af2cd5ea249d48023 not found: ID does not exist" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.527678 4769 scope.go:117] "RemoveContainer" containerID="7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497" Oct 06 07:31:00 crc kubenswrapper[4769]: E1006 07:31:00.528367 4769 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497\": container with ID starting with 7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497 not found: ID does not exist" containerID="7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.528390 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497"} err="failed to get container status \"7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497\": rpc error: code = NotFound desc = could not find container \"7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497\": container with ID starting with 7c064873e5affebbe69ad3cad978142d767282c12faa193ddfceaf9ef33af497 not found: ID does not exist" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.528402 4769 scope.go:117] "RemoveContainer" containerID="38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6" Oct 06 07:31:00 crc kubenswrapper[4769]: E1006 07:31:00.528635 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6\": container with ID starting with 38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6 not found: ID does not exist" containerID="38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6" Oct 06 07:31:00 crc kubenswrapper[4769]: I1006 07:31:00.528656 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6"} err="failed to get container status \"38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6\": rpc error: code = NotFound 
desc = could not find container \"38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6\": container with ID starting with 38aba12d0444453140dac888acb7223ece70b16201ea10d1fc2d4b702e20cdc6 not found: ID does not exist" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.175009 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" path="/var/lib/kubelet/pods/931992f0-da1a-4ed5-80fa-268fe083c2d8/volumes" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.200592 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.200968 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="init" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.200989 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="init" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201017 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="extract-utilities" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201027 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="extract-utilities" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201036 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="dnsmasq-dns" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201043 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="dnsmasq-dns" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201063 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" 
containerName="extract-content" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201070 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="extract-content" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201084 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201095 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201107 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="extract-utilities" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201114 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="extract-utilities" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201124 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="extract-content" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201131 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="extract-content" Oct 06 07:31:02 crc kubenswrapper[4769]: E1006 07:31:02.201140 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201147 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201321 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="931992f0-da1a-4ed5-80fa-268fe083c2d8" 
containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201335 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70a68a0-0a99-4cad-acf0-fec33240636a" containerName="registry-server" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.201350 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="dnsmasq-dns" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.202365 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.239893 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.272036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.272082 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.272108 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 
07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.272140 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgt24\" (UniqueName: \"kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.272176 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.373350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.373404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.373444 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc 
kubenswrapper[4769]: I1006 07:31:02.373479 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgt24\" (UniqueName: \"kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.373510 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.374883 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.375460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.376622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.376894 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.402146 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgt24\" (UniqueName: \"kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24\") pod \"dnsmasq-dns-5c95b97d95-bbkf5\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:02 crc kubenswrapper[4769]: I1006 07:31:02.521042 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.087227 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.375410 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.381094 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.384001 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tq7wb" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.384830 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.384887 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.386636 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.401643 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.451650 4769 generic.go:334] "Generic (PLEG): container finished" podID="7f947e31-bd5d-4571-a79f-85805377370c" containerID="6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8" exitCode=0 Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.451696 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" event={"ID":"7f947e31-bd5d-4571-a79f-85805377370c","Type":"ContainerDied","Data":"6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8"} Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.451726 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" event={"ID":"7f947e31-bd5d-4571-a79f-85805377370c","Type":"ContainerStarted","Data":"f7d11daa23d21e80789e9006a0a5892d9f16f3c7a0f9f0264fb081b219586cab"} Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.489089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.489150 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-cache\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.489186 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-lock\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.489204 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mglpf\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-kube-api-access-mglpf\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.489268 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591126 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-cache\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" 
Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591261 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-lock\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mglpf\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-kube-api-access-mglpf\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591369 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591784 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-lock\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: E1006 07:31:03.591652 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:03 crc kubenswrapper[4769]: E1006 07:31:03.591849 4769 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.591761 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-cache\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: E1006 07:31:03.591909 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. No retries permitted until 2025-10-06 07:31:04.091889684 +0000 UTC m=+860.616170831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.592118 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.593792 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cb69d786f-742h8" podUID="ff15fac5-dae0-423d-803b-bd3caad160ae" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.610263 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mglpf\" (UniqueName: 
\"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-kube-api-access-mglpf\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.622014 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.914007 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gtdqz"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.915394 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.918291 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.918363 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.918402 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.930938 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gtdqz"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.979275 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5dtjw"] Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.980329 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.986380 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gtdqz"] Oct 06 07:31:03 crc kubenswrapper[4769]: E1006 07:31:03.986954 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-zh9g2 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-zh9g2 ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-gtdqz" podUID="a05dddf6-489a-4d6c-b191-a7b6f239a028" Oct 06 07:31:03 crc kubenswrapper[4769]: I1006 07:31:03.995710 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5dtjw"] Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100294 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100366 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9g2\" (UniqueName: \"kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2\") pod 
\"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100488 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100540 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100571 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100596 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift\") pod \"swift-ring-rebalance-5dtjw\" (UID: 
\"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100635 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100671 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100700 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100723 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100750 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle\") pod \"swift-ring-rebalance-5dtjw\" 
(UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100771 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdjft\" (UniqueName: \"kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.100796 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: E1006 07:31:04.101042 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:04 crc kubenswrapper[4769]: E1006 07:31:04.101060 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:04 crc kubenswrapper[4769]: E1006 07:31:04.101107 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. No retries permitted until 2025-10-06 07:31:05.10108887 +0000 UTC m=+861.625370017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202114 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202195 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202221 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202259 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202295 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202335 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202357 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdjft\" (UniqueName: \"kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202381 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202406 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202468 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202506 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.202539 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9g2\" (UniqueName: \"kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.203093 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices\") pod 
\"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.204053 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.204086 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.204622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.204884 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.205144 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc 
kubenswrapper[4769]: I1006 07:31:04.209978 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.210908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.211161 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.211649 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.212047 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.218808 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.229054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9g2\" (UniqueName: \"kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2\") pod \"swift-ring-rebalance-gtdqz\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.235031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdjft\" (UniqueName: \"kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft\") pod \"swift-ring-rebalance-5dtjw\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.298948 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.458468 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.473326 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.614876 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615002 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615099 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh9g2\" (UniqueName: \"kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615133 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615158 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615185 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615227 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts\") pod \"a05dddf6-489a-4d6c-b191-a7b6f239a028\" (UID: \"a05dddf6-489a-4d6c-b191-a7b6f239a028\") " Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615442 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615459 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615674 4769 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615697 4769 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a05dddf6-489a-4d6c-b191-a7b6f239a028-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.615844 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts" (OuterVolumeSpecName: "scripts") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.620373 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.620330 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.620446 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2" (OuterVolumeSpecName: "kube-api-access-zh9g2") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "kube-api-access-zh9g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.620795 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a05dddf6-489a-4d6c-b191-a7b6f239a028" (UID: "a05dddf6-489a-4d6c-b191-a7b6f239a028"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.716823 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh9g2\" (UniqueName: \"kubernetes.io/projected/a05dddf6-489a-4d6c-b191-a7b6f239a028-kube-api-access-zh9g2\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.716865 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.716877 4769 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.716889 4769 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a05dddf6-489a-4d6c-b191-a7b6f239a028-dispersionconf\") on node \"crc\" 
DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.716900 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a05dddf6-489a-4d6c-b191-a7b6f239a028-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:04 crc kubenswrapper[4769]: I1006 07:31:04.757572 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5dtjw"] Oct 06 07:31:04 crc kubenswrapper[4769]: W1006 07:31:04.760150 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod565d366a_8723_4b82_8b01_cd4d05b66e18.slice/crio-798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199 WatchSource:0}: Error finding container 798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199: Status 404 returned error can't find the container with id 798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199 Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.122389 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:05 crc kubenswrapper[4769]: E1006 07:31:05.122571 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:05 crc kubenswrapper[4769]: E1006 07:31:05.122584 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:05 crc kubenswrapper[4769]: E1006 07:31:05.122621 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. 
No retries permitted until 2025-10-06 07:31:07.122610017 +0000 UTC m=+863.646891154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.382952 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-prfnk"] Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.385385 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-prfnk" Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.395493 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-prfnk"] Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.465609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5dtjw" event={"ID":"565d366a-8723-4b82-8b01-cd4d05b66e18","Type":"ContainerStarted","Data":"798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199"} Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.465656 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gtdqz" Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.506165 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-gtdqz"] Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.512845 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-gtdqz"] Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.528051 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ckp4\" (UniqueName: \"kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4\") pod \"glance-db-create-prfnk\" (UID: \"8f13035c-17a8-4de2-b9c6-31e517b47675\") " pod="openstack/glance-db-create-prfnk" Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.629404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ckp4\" (UniqueName: \"kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4\") pod \"glance-db-create-prfnk\" (UID: \"8f13035c-17a8-4de2-b9c6-31e517b47675\") " pod="openstack/glance-db-create-prfnk" Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.651948 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ckp4\" (UniqueName: \"kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4\") pod \"glance-db-create-prfnk\" (UID: \"8f13035c-17a8-4de2-b9c6-31e517b47675\") " pod="openstack/glance-db-create-prfnk" Oct 06 07:31:05 crc kubenswrapper[4769]: I1006 07:31:05.705679 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-prfnk" Oct 06 07:31:06 crc kubenswrapper[4769]: I1006 07:31:06.133141 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-prfnk"] Oct 06 07:31:06 crc kubenswrapper[4769]: W1006 07:31:06.137856 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f13035c_17a8_4de2_b9c6_31e517b47675.slice/crio-270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926 WatchSource:0}: Error finding container 270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926: Status 404 returned error can't find the container with id 270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926 Oct 06 07:31:06 crc kubenswrapper[4769]: I1006 07:31:06.175522 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a05dddf6-489a-4d6c-b191-a7b6f239a028" path="/var/lib/kubelet/pods/a05dddf6-489a-4d6c-b191-a7b6f239a028/volumes" Oct 06 07:31:06 crc kubenswrapper[4769]: I1006 07:31:06.472764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-prfnk" event={"ID":"8f13035c-17a8-4de2-b9c6-31e517b47675","Type":"ContainerStarted","Data":"270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926"} Oct 06 07:31:07 crc kubenswrapper[4769]: I1006 07:31:07.152078 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:07 crc kubenswrapper[4769]: E1006 07:31:07.152322 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:07 crc kubenswrapper[4769]: E1006 07:31:07.152349 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:07 crc kubenswrapper[4769]: E1006 07:31:07.152444 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. No retries permitted until 2025-10-06 07:31:11.152401148 +0000 UTC m=+867.676682315 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:08 crc kubenswrapper[4769]: I1006 07:31:08.184834 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.503721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" event={"ID":"7f947e31-bd5d-4571-a79f-85805377370c","Type":"ContainerStarted","Data":"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500"} Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.733517 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dnxtk"] Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.734483 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.751212 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dnxtk"] Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.794913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lwk\" (UniqueName: \"kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk\") pod \"keystone-db-create-dnxtk\" (UID: \"924017e0-5e67-41ee-a60a-48d5c71da1cc\") " pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.896484 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lwk\" (UniqueName: \"kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk\") pod \"keystone-db-create-dnxtk\" (UID: \"924017e0-5e67-41ee-a60a-48d5c71da1cc\") " pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:09 crc kubenswrapper[4769]: I1006 07:31:09.917700 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lwk\" (UniqueName: \"kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk\") pod \"keystone-db-create-dnxtk\" (UID: \"924017e0-5e67-41ee-a60a-48d5c71da1cc\") " pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.036821 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nxmsq"] Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.038114 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.047713 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxmsq"] Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.056047 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.202248 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc97l\" (UniqueName: \"kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l\") pod \"placement-db-create-nxmsq\" (UID: \"1e3f9e58-995c-4420-94bb-5e9672b469b7\") " pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.304511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc97l\" (UniqueName: \"kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l\") pod \"placement-db-create-nxmsq\" (UID: \"1e3f9e58-995c-4420-94bb-5e9672b469b7\") " pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.326393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc97l\" (UniqueName: \"kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l\") pod \"placement-db-create-nxmsq\" (UID: \"1e3f9e58-995c-4420-94bb-5e9672b469b7\") " pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.360041 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.499868 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dnxtk"] Oct 06 07:31:10 crc kubenswrapper[4769]: W1006 07:31:10.515575 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod924017e0_5e67_41ee_a60a_48d5c71da1cc.slice/crio-549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab WatchSource:0}: Error finding container 549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab: Status 404 returned error can't find the container with id 549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.516024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-prfnk" event={"ID":"8f13035c-17a8-4de2-b9c6-31e517b47675","Type":"ContainerStarted","Data":"6d305a070a689b91a0dacb908c774ed796da5a77b9b086ff52dbd65e2d3db2c7"} Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.516099 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.556713 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-prfnk" podStartSLOduration=5.556655024 podStartE2EDuration="5.556655024s" podCreationTimestamp="2025-10-06 07:31:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:31:10.553613941 +0000 UTC m=+867.077895088" watchObservedRunningTime="2025-10-06 07:31:10.556655024 +0000 UTC m=+867.080936171" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.566199 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" 
podStartSLOduration=8.566174243 podStartE2EDuration="8.566174243s" podCreationTimestamp="2025-10-06 07:31:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:31:10.542924538 +0000 UTC m=+867.067205685" watchObservedRunningTime="2025-10-06 07:31:10.566174243 +0000 UTC m=+867.090455420" Oct 06 07:31:10 crc kubenswrapper[4769]: I1006 07:31:10.786175 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nxmsq"] Oct 06 07:31:10 crc kubenswrapper[4769]: W1006 07:31:10.801385 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e3f9e58_995c_4420_94bb_5e9672b469b7.slice/crio-f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a WatchSource:0}: Error finding container f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a: Status 404 returned error can't find the container with id f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.221471 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:11 crc kubenswrapper[4769]: E1006 07:31:11.221679 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:11 crc kubenswrapper[4769]: E1006 07:31:11.221848 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:11 crc kubenswrapper[4769]: E1006 07:31:11.221904 4769 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. No retries permitted until 2025-10-06 07:31:19.221888563 +0000 UTC m=+875.746169710 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.524871 4769 generic.go:334] "Generic (PLEG): container finished" podID="924017e0-5e67-41ee-a60a-48d5c71da1cc" containerID="046c3794949f7e4ce6872ad9f332a14039594222883b133d36400baacd1cb241" exitCode=0 Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.524983 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dnxtk" event={"ID":"924017e0-5e67-41ee-a60a-48d5c71da1cc","Type":"ContainerDied","Data":"046c3794949f7e4ce6872ad9f332a14039594222883b133d36400baacd1cb241"} Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.525009 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dnxtk" event={"ID":"924017e0-5e67-41ee-a60a-48d5c71da1cc","Type":"ContainerStarted","Data":"549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab"} Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.526294 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxmsq" event={"ID":"1e3f9e58-995c-4420-94bb-5e9672b469b7","Type":"ContainerStarted","Data":"72dba0e6f3893968b96cd0ad7f16f3f937382ca65e231ece87350ba09bcd4116"} Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.526340 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxmsq" 
event={"ID":"1e3f9e58-995c-4420-94bb-5e9672b469b7","Type":"ContainerStarted","Data":"f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a"} Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.532209 4769 generic.go:334] "Generic (PLEG): container finished" podID="8f13035c-17a8-4de2-b9c6-31e517b47675" containerID="6d305a070a689b91a0dacb908c774ed796da5a77b9b086ff52dbd65e2d3db2c7" exitCode=0 Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.532316 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-prfnk" event={"ID":"8f13035c-17a8-4de2-b9c6-31e517b47675","Type":"ContainerDied","Data":"6d305a070a689b91a0dacb908c774ed796da5a77b9b086ff52dbd65e2d3db2c7"} Oct 06 07:31:11 crc kubenswrapper[4769]: I1006 07:31:11.555734 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-nxmsq" podStartSLOduration=1.555715466 podStartE2EDuration="1.555715466s" podCreationTimestamp="2025-10-06 07:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:31:11.549287611 +0000 UTC m=+868.073568758" watchObservedRunningTime="2025-10-06 07:31:11.555715466 +0000 UTC m=+868.079996613" Oct 06 07:31:12 crc kubenswrapper[4769]: I1006 07:31:12.541219 4769 generic.go:334] "Generic (PLEG): container finished" podID="1e3f9e58-995c-4420-94bb-5e9672b469b7" containerID="72dba0e6f3893968b96cd0ad7f16f3f937382ca65e231ece87350ba09bcd4116" exitCode=0 Oct 06 07:31:12 crc kubenswrapper[4769]: I1006 07:31:12.541402 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxmsq" event={"ID":"1e3f9e58-995c-4420-94bb-5e9672b469b7","Type":"ContainerDied","Data":"72dba0e6f3893968b96cd0ad7f16f3f937382ca65e231ece87350ba09bcd4116"} Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.026281 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.031529 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.062382 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-prfnk" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.227885 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28lwk\" (UniqueName: \"kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk\") pod \"924017e0-5e67-41ee-a60a-48d5c71da1cc\" (UID: \"924017e0-5e67-41ee-a60a-48d5c71da1cc\") " Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.228080 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kc97l\" (UniqueName: \"kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l\") pod \"1e3f9e58-995c-4420-94bb-5e9672b469b7\" (UID: \"1e3f9e58-995c-4420-94bb-5e9672b469b7\") " Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.228133 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ckp4\" (UniqueName: \"kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4\") pod \"8f13035c-17a8-4de2-b9c6-31e517b47675\" (UID: \"8f13035c-17a8-4de2-b9c6-31e517b47675\") " Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.232345 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4" (OuterVolumeSpecName: "kube-api-access-2ckp4") pod "8f13035c-17a8-4de2-b9c6-31e517b47675" (UID: "8f13035c-17a8-4de2-b9c6-31e517b47675"). InnerVolumeSpecName "kube-api-access-2ckp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.238604 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk" (OuterVolumeSpecName: "kube-api-access-28lwk") pod "924017e0-5e67-41ee-a60a-48d5c71da1cc" (UID: "924017e0-5e67-41ee-a60a-48d5c71da1cc"). InnerVolumeSpecName "kube-api-access-28lwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.253609 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l" (OuterVolumeSpecName: "kube-api-access-kc97l") pod "1e3f9e58-995c-4420-94bb-5e9672b469b7" (UID: "1e3f9e58-995c-4420-94bb-5e9672b469b7"). InnerVolumeSpecName "kube-api-access-kc97l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.329961 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kc97l\" (UniqueName: \"kubernetes.io/projected/1e3f9e58-995c-4420-94bb-5e9672b469b7-kube-api-access-kc97l\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.330004 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ckp4\" (UniqueName: \"kubernetes.io/projected/8f13035c-17a8-4de2-b9c6-31e517b47675-kube-api-access-2ckp4\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.330019 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28lwk\" (UniqueName: \"kubernetes.io/projected/924017e0-5e67-41ee-a60a-48d5c71da1cc-kube-api-access-28lwk\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.567397 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-prfnk" 
event={"ID":"8f13035c-17a8-4de2-b9c6-31e517b47675","Type":"ContainerDied","Data":"270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926"} Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.567462 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="270346dc75e7d0792c0c5f67b6fa23c21f0865ad7cd20b56953f303810f06926" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.567544 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-prfnk" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.570906 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5dtjw" event={"ID":"565d366a-8723-4b82-8b01-cd4d05b66e18","Type":"ContainerStarted","Data":"b22983abb9e17f7fffda637b0a8678e625cd5a4aad3a4b6166bd1e5f2ac61c1e"} Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.576182 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dnxtk" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.578837 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-nxmsq" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.574413 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dnxtk" event={"ID":"924017e0-5e67-41ee-a60a-48d5c71da1cc","Type":"ContainerDied","Data":"549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab"} Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.583603 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549fdafcef71b122946118df81291ef625252f68b38730704eb2a24d683281ab" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.583632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nxmsq" event={"ID":"1e3f9e58-995c-4420-94bb-5e9672b469b7","Type":"ContainerDied","Data":"f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a"} Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.583648 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8fa2d8df903ab2e27970f279561f2ed8eaeb23c5d15d2cc033495b15d9d9f9a" Oct 06 07:31:14 crc kubenswrapper[4769]: I1006 07:31:14.599178 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5dtjw" podStartSLOduration=2.449240114 podStartE2EDuration="11.599160971s" podCreationTimestamp="2025-10-06 07:31:03 +0000 UTC" firstStartedPulling="2025-10-06 07:31:04.763353769 +0000 UTC m=+861.287634916" lastFinishedPulling="2025-10-06 07:31:13.913274626 +0000 UTC m=+870.437555773" observedRunningTime="2025-10-06 07:31:14.592361154 +0000 UTC m=+871.116642301" watchObservedRunningTime="2025-10-06 07:31:14.599160971 +0000 UTC m=+871.123442118" Oct 06 07:31:17 crc kubenswrapper[4769]: I1006 07:31:17.522552 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:31:17 crc kubenswrapper[4769]: I1006 07:31:17.595788 4769 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:31:17 crc kubenswrapper[4769]: I1006 07:31:17.596062 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="dnsmasq-dns" containerID="cri-o://97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce" gracePeriod=10 Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.150303 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.306869 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config\") pod \"61ee5649-5628-4caf-af79-6b6f6b944c79\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.307040 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfx6t\" (UniqueName: \"kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t\") pod \"61ee5649-5628-4caf-af79-6b6f6b944c79\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.307104 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb\") pod \"61ee5649-5628-4caf-af79-6b6f6b944c79\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.307260 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb\") pod 
\"61ee5649-5628-4caf-af79-6b6f6b944c79\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.307650 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc\") pod \"61ee5649-5628-4caf-af79-6b6f6b944c79\" (UID: \"61ee5649-5628-4caf-af79-6b6f6b944c79\") " Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.315605 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t" (OuterVolumeSpecName: "kube-api-access-xfx6t") pod "61ee5649-5628-4caf-af79-6b6f6b944c79" (UID: "61ee5649-5628-4caf-af79-6b6f6b944c79"). InnerVolumeSpecName "kube-api-access-xfx6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.349356 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61ee5649-5628-4caf-af79-6b6f6b944c79" (UID: "61ee5649-5628-4caf-af79-6b6f6b944c79"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.350494 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61ee5649-5628-4caf-af79-6b6f6b944c79" (UID: "61ee5649-5628-4caf-af79-6b6f6b944c79"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.352879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61ee5649-5628-4caf-af79-6b6f6b944c79" (UID: "61ee5649-5628-4caf-af79-6b6f6b944c79"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.354901 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config" (OuterVolumeSpecName: "config") pod "61ee5649-5628-4caf-af79-6b6f6b944c79" (UID: "61ee5649-5628-4caf-af79-6b6f6b944c79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.410003 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.410037 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.410046 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.410054 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ee5649-5628-4caf-af79-6b6f6b944c79-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.410065 
4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfx6t\" (UniqueName: \"kubernetes.io/projected/61ee5649-5628-4caf-af79-6b6f6b944c79-kube-api-access-xfx6t\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.623329 4769 generic.go:334] "Generic (PLEG): container finished" podID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerID="97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce" exitCode=0 Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.623405 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.623452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" event={"ID":"61ee5649-5628-4caf-af79-6b6f6b944c79","Type":"ContainerDied","Data":"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce"} Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.624649 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c4cb665f-mg486" event={"ID":"61ee5649-5628-4caf-af79-6b6f6b944c79","Type":"ContainerDied","Data":"ecc58fb541af88abd939c6eda51c4d5fd9fe337ce5aca1a662d26b88125b9290"} Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.624732 4769 scope.go:117] "RemoveContainer" containerID="97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.641798 4769 scope.go:117] "RemoveContainer" containerID="a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.655547 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.668368 4769 scope.go:117] "RemoveContainer" containerID="97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce" Oct 06 
07:31:18 crc kubenswrapper[4769]: E1006 07:31:18.669782 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce\": container with ID starting with 97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce not found: ID does not exist" containerID="97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.669836 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce"} err="failed to get container status \"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce\": rpc error: code = NotFound desc = could not find container \"97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce\": container with ID starting with 97f6ad92c13478447cc48b488ceb591c143a1217c15cba048a0ff1961a9c99ce not found: ID does not exist" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.669900 4769 scope.go:117] "RemoveContainer" containerID="a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.670160 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78c4cb665f-mg486"] Oct 06 07:31:18 crc kubenswrapper[4769]: E1006 07:31:18.671045 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108\": container with ID starting with a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108 not found: ID does not exist" containerID="a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108" Oct 06 07:31:18 crc kubenswrapper[4769]: I1006 07:31:18.671068 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108"} err="failed to get container status \"a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108\": rpc error: code = NotFound desc = could not find container \"a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108\": container with ID starting with a5838fce251c886058c1953797d392ea4195046e3c7977064d2a950f64013108 not found: ID does not exist" Oct 06 07:31:19 crc kubenswrapper[4769]: I1006 07:31:19.222567 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:19 crc kubenswrapper[4769]: E1006 07:31:19.223387 4769 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Oct 06 07:31:19 crc kubenswrapper[4769]: E1006 07:31:19.223410 4769 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Oct 06 07:31:19 crc kubenswrapper[4769]: E1006 07:31:19.223460 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift podName:91c9fe86-3c6f-485c-94a9-5adc4a88d14f nodeName:}" failed. No retries permitted until 2025-10-06 07:31:35.223446336 +0000 UTC m=+891.747727483 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift") pod "swift-storage-0" (UID: "91c9fe86-3c6f-485c-94a9-5adc4a88d14f") : configmap "swift-ring-files" not found Oct 06 07:31:19 crc kubenswrapper[4769]: I1006 07:31:19.632644 4769 generic.go:334] "Generic (PLEG): container finished" podID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerID="1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7" exitCode=0 Oct 06 07:31:19 crc kubenswrapper[4769]: I1006 07:31:19.632721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerDied","Data":"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7"} Oct 06 07:31:19 crc kubenswrapper[4769]: I1006 07:31:19.634878 4769 generic.go:334] "Generic (PLEG): container finished" podID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerID="3357c991a8b7085ad0995bf2bb18c4579abdee907c0b8d5108a27ad92d925d5a" exitCode=0 Oct 06 07:31:19 crc kubenswrapper[4769]: I1006 07:31:19.634895 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerDied","Data":"3357c991a8b7085ad0995bf2bb18c4579abdee907c0b8d5108a27ad92d925d5a"} Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129287 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-284d-account-create-5hzmk"] Oct 06 07:31:20 crc kubenswrapper[4769]: E1006 07:31:20.129699 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="init" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129722 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="init" Oct 06 07:31:20 crc kubenswrapper[4769]: E1006 07:31:20.129743 4769 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f13035c-17a8-4de2-b9c6-31e517b47675" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129751 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f13035c-17a8-4de2-b9c6-31e517b47675" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: E1006 07:31:20.129760 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="dnsmasq-dns" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129768 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="dnsmasq-dns" Oct 06 07:31:20 crc kubenswrapper[4769]: E1006 07:31:20.129791 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924017e0-5e67-41ee-a60a-48d5c71da1cc" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129797 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="924017e0-5e67-41ee-a60a-48d5c71da1cc" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: E1006 07:31:20.129808 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3f9e58-995c-4420-94bb-5e9672b469b7" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129815 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3f9e58-995c-4420-94bb-5e9672b469b7" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.129992 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f13035c-17a8-4de2-b9c6-31e517b47675" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.130005 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3f9e58-995c-4420-94bb-5e9672b469b7" containerName="mariadb-database-create" Oct 06 07:31:20 
crc kubenswrapper[4769]: I1006 07:31:20.130022 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" containerName="dnsmasq-dns" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.130033 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="924017e0-5e67-41ee-a60a-48d5c71da1cc" containerName="mariadb-database-create" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.130719 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.136552 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-284d-account-create-5hzmk"] Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.137308 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.176576 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61ee5649-5628-4caf-af79-6b6f6b944c79" path="/var/lib/kubelet/pods/61ee5649-5628-4caf-af79-6b6f6b944c79/volumes" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.239367 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5z96\" (UniqueName: \"kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96\") pod \"placement-284d-account-create-5hzmk\" (UID: \"1c65e686-3f39-4052-97ef-a7bf26d32ffb\") " pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.341709 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5z96\" (UniqueName: \"kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96\") pod \"placement-284d-account-create-5hzmk\" (UID: \"1c65e686-3f39-4052-97ef-a7bf26d32ffb\") " 
pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.360040 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5z96\" (UniqueName: \"kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96\") pod \"placement-284d-account-create-5hzmk\" (UID: \"1c65e686-3f39-4052-97ef-a7bf26d32ffb\") " pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.445236 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.569680 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.577900 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-rdj69" podUID="7666a29e-0c83-4099-ae6d-1fc333d3c630" containerName="ovn-controller" probeResult="failure" output=< Oct 06 07:31:20 crc kubenswrapper[4769]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 06 07:31:20 crc kubenswrapper[4769]: > Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.588023 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-4qmzc" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.659139 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerStarted","Data":"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299"} Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.660131 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.668564 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerStarted","Data":"cf574ca636813e8a4cbc28e21ec7c71b1f037690fa33d684e5abc31b57e31f77"} Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.669506 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.690653 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.178834272 podStartE2EDuration="1m15.690636183s" podCreationTimestamp="2025-10-06 07:30:05 +0000 UTC" firstStartedPulling="2025-10-06 07:30:07.268412059 +0000 UTC m=+803.792693206" lastFinishedPulling="2025-10-06 07:30:45.78021397 +0000 UTC m=+842.304495117" observedRunningTime="2025-10-06 07:31:20.68903898 +0000 UTC m=+877.213320127" watchObservedRunningTime="2025-10-06 07:31:20.690636183 +0000 UTC m=+877.214917330" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.720369 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.772785791 podStartE2EDuration="1m15.720352466s" podCreationTimestamp="2025-10-06 07:30:05 +0000 UTC" firstStartedPulling="2025-10-06 07:30:06.950940479 +0000 UTC m=+803.475221626" lastFinishedPulling="2025-10-06 07:30:45.898507154 +0000 UTC m=+842.422788301" observedRunningTime="2025-10-06 07:31:20.71465801 +0000 UTC m=+877.238939157" watchObservedRunningTime="2025-10-06 07:31:20.720352466 +0000 UTC m=+877.244633613" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.828450 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rdj69-config-2bmps"] Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.829603 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.831390 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.839081 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69-config-2bmps"] Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.938359 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-284d-account-create-5hzmk"] Oct 06 07:31:20 crc kubenswrapper[4769]: W1006 07:31:20.944067 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c65e686_3f39_4052_97ef_a7bf26d32ffb.slice/crio-1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e WatchSource:0}: Error finding container 1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e: Status 404 returned error can't find the container with id 1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.954326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.954448 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc 
kubenswrapper[4769]: I1006 07:31:20.954486 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqhv\" (UniqueName: \"kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.954526 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.954584 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:20 crc kubenswrapper[4769]: I1006 07:31:20.954626 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056304 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " 
pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056677 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056756 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056874 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.056978 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " 
pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.057019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqhv\" (UniqueName: \"kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.057074 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.057293 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.057667 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.059435 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " 
pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.074296 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqhv\" (UniqueName: \"kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv\") pod \"ovn-controller-rdj69-config-2bmps\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.187254 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.617714 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69-config-2bmps"] Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.675513 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-2bmps" event={"ID":"4cf45804-a6de-443d-95fb-300f5990e717","Type":"ContainerStarted","Data":"733b49fffa3cfefb133d2e6febe8894e76fd591defe9857acf2e1e18a4c74c0f"} Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.677085 4769 generic.go:334] "Generic (PLEG): container finished" podID="1c65e686-3f39-4052-97ef-a7bf26d32ffb" containerID="627511aae2d9b24bfafa4b6ba20a25c62edc3928f1ab91e66f6b26628035e0e9" exitCode=0 Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.677123 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-284d-account-create-5hzmk" event={"ID":"1c65e686-3f39-4052-97ef-a7bf26d32ffb","Type":"ContainerDied","Data":"627511aae2d9b24bfafa4b6ba20a25c62edc3928f1ab91e66f6b26628035e0e9"} Oct 06 07:31:21 crc kubenswrapper[4769]: I1006 07:31:21.677162 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-284d-account-create-5hzmk" 
event={"ID":"1c65e686-3f39-4052-97ef-a7bf26d32ffb","Type":"ContainerStarted","Data":"1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e"} Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.245628 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.245694 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.687633 4769 generic.go:334] "Generic (PLEG): container finished" podID="565d366a-8723-4b82-8b01-cd4d05b66e18" containerID="b22983abb9e17f7fffda637b0a8678e625cd5a4aad3a4b6166bd1e5f2ac61c1e" exitCode=0 Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.687700 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5dtjw" event={"ID":"565d366a-8723-4b82-8b01-cd4d05b66e18","Type":"ContainerDied","Data":"b22983abb9e17f7fffda637b0a8678e625cd5a4aad3a4b6166bd1e5f2ac61c1e"} Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.689047 4769 generic.go:334] "Generic (PLEG): container finished" podID="4cf45804-a6de-443d-95fb-300f5990e717" containerID="7adebf2664adf0bbe8bb25a3dffaf53061421f7b25c2542fe629dba849544e38" exitCode=0 Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.689147 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-2bmps" 
event={"ID":"4cf45804-a6de-443d-95fb-300f5990e717","Type":"ContainerDied","Data":"7adebf2664adf0bbe8bb25a3dffaf53061421f7b25c2542fe629dba849544e38"} Oct 06 07:31:22 crc kubenswrapper[4769]: I1006 07:31:22.974497 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.089200 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5z96\" (UniqueName: \"kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96\") pod \"1c65e686-3f39-4052-97ef-a7bf26d32ffb\" (UID: \"1c65e686-3f39-4052-97ef-a7bf26d32ffb\") " Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.093894 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96" (OuterVolumeSpecName: "kube-api-access-r5z96") pod "1c65e686-3f39-4052-97ef-a7bf26d32ffb" (UID: "1c65e686-3f39-4052-97ef-a7bf26d32ffb"). InnerVolumeSpecName "kube-api-access-r5z96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.190732 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5z96\" (UniqueName: \"kubernetes.io/projected/1c65e686-3f39-4052-97ef-a7bf26d32ffb-kube-api-access-r5z96\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.699874 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-284d-account-create-5hzmk" event={"ID":"1c65e686-3f39-4052-97ef-a7bf26d32ffb","Type":"ContainerDied","Data":"1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e"} Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.700224 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc110b3689524b77d34d2c1f4099a54bb30a292a544e77c1204baf93f4dad5e" Oct 06 07:31:23 crc kubenswrapper[4769]: I1006 07:31:23.699932 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-284d-account-create-5hzmk" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.077852 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.099938 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207026 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207061 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtqhv\" (UniqueName: \"kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207118 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207121 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run" (OuterVolumeSpecName: "var-run") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207178 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207199 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdjft\" (UniqueName: \"kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207240 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207275 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207310 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207328 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207345 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207364 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift\") pod \"565d366a-8723-4b82-8b01-cd4d05b66e18\" (UID: \"565d366a-8723-4b82-8b01-cd4d05b66e18\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207412 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts\") pod \"4cf45804-a6de-443d-95fb-300f5990e717\" (UID: \"4cf45804-a6de-443d-95fb-300f5990e717\") " Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.207694 4769 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.208437 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts" (OuterVolumeSpecName: "scripts") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.208716 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.208737 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.208752 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.209244 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.209464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.211690 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv" (OuterVolumeSpecName: "kube-api-access-dtqhv") pod "4cf45804-a6de-443d-95fb-300f5990e717" (UID: "4cf45804-a6de-443d-95fb-300f5990e717"). InnerVolumeSpecName "kube-api-access-dtqhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.229330 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft" (OuterVolumeSpecName: "kube-api-access-pdjft") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "kube-api-access-pdjft". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.229701 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.230214 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts" (OuterVolumeSpecName: "scripts") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.232268 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.234779 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "565d366a-8723-4b82-8b01-cd4d05b66e18" (UID: "565d366a-8723-4b82-8b01-cd4d05b66e18"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309057 4769 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-ring-data-devices\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309284 4769 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309362 4769 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4cf45804-a6de-443d-95fb-300f5990e717-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309472 4769 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/565d366a-8723-4b82-8b01-cd4d05b66e18-etc-swift\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309544 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309599 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtqhv\" (UniqueName: \"kubernetes.io/projected/4cf45804-a6de-443d-95fb-300f5990e717-kube-api-access-dtqhv\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309676 4769 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4cf45804-a6de-443d-95fb-300f5990e717-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309730 4769 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/565d366a-8723-4b82-8b01-cd4d05b66e18-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309788 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdjft\" (UniqueName: \"kubernetes.io/projected/565d366a-8723-4b82-8b01-cd4d05b66e18-kube-api-access-pdjft\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309849 4769 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-dispersionconf\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309907 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.309959 4769 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/565d366a-8723-4b82-8b01-cd4d05b66e18-swiftconf\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.707898 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-2bmps" event={"ID":"4cf45804-a6de-443d-95fb-300f5990e717","Type":"ContainerDied","Data":"733b49fffa3cfefb133d2e6febe8894e76fd591defe9857acf2e1e18a4c74c0f"} Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.708583 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733b49fffa3cfefb133d2e6febe8894e76fd591defe9857acf2e1e18a4c74c0f" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.707934 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rdj69-config-2bmps" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.718842 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5dtjw" event={"ID":"565d366a-8723-4b82-8b01-cd4d05b66e18","Type":"ContainerDied","Data":"798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199"} Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.718885 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798ce33517caefce797fdc06919ac8ae790b1d796e5c2a9e352a841a36178199" Oct 06 07:31:24 crc kubenswrapper[4769]: I1006 07:31:24.718915 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5dtjw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.208296 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rdj69-config-2bmps"] Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.213860 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rdj69-config-2bmps"] Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320099 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rdj69-config-h5szw"] Oct 06 07:31:25 crc kubenswrapper[4769]: E1006 07:31:25.320435 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf45804-a6de-443d-95fb-300f5990e717" containerName="ovn-config" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320450 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf45804-a6de-443d-95fb-300f5990e717" containerName="ovn-config" Oct 06 07:31:25 crc kubenswrapper[4769]: E1006 07:31:25.320466 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c65e686-3f39-4052-97ef-a7bf26d32ffb" containerName="mariadb-account-create" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320472 4769 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1c65e686-3f39-4052-97ef-a7bf26d32ffb" containerName="mariadb-account-create" Oct 06 07:31:25 crc kubenswrapper[4769]: E1006 07:31:25.320479 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="565d366a-8723-4b82-8b01-cd4d05b66e18" containerName="swift-ring-rebalance" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320485 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="565d366a-8723-4b82-8b01-cd4d05b66e18" containerName="swift-ring-rebalance" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320637 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="565d366a-8723-4b82-8b01-cd4d05b66e18" containerName="swift-ring-rebalance" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320657 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf45804-a6de-443d-95fb-300f5990e717" containerName="ovn-config" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.320666 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c65e686-3f39-4052-97ef-a7bf26d32ffb" containerName="mariadb-account-create" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.321138 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.331545 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.333499 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69-config-h5szw"] Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.409941 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-93a1-account-create-s99j6"] Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.414040 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.417941 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.432348 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.432431 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.432495 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.433024 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42qp\" (UniqueName: \"kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.433086 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.433149 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.443374 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-93a1-account-create-s99j6"] Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.534266 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.534615 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.534763 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts\") pod 
\"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.534873 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42qp\" (UniqueName: \"kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535009 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535123 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535240 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dldzr\" (UniqueName: \"kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr\") pod 
\"glance-93a1-account-create-s99j6\" (UID: \"104ca122-82e6-441e-ae08-76ab2b828724\") " pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535283 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.535470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.537371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.538944 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.540632 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rdj69" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.562779 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n42qp\" (UniqueName: \"kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp\") pod \"ovn-controller-rdj69-config-h5szw\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.636739 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dldzr\" (UniqueName: \"kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr\") pod \"glance-93a1-account-create-s99j6\" (UID: \"104ca122-82e6-441e-ae08-76ab2b828724\") " pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.642142 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.657826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dldzr\" (UniqueName: \"kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr\") pod \"glance-93a1-account-create-s99j6\" (UID: \"104ca122-82e6-441e-ae08-76ab2b828724\") " pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:25 crc kubenswrapper[4769]: I1006 07:31:25.737150 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.097704 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rdj69-config-h5szw"] Oct 06 07:31:26 crc kubenswrapper[4769]: W1006 07:31:26.105938 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e551ce4_5eb3_49ec_8e0f_b6f7b7fdfc87.slice/crio-d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d WatchSource:0}: Error finding container d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d: Status 404 returned error can't find the container with id d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.178663 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf45804-a6de-443d-95fb-300f5990e717" path="/var/lib/kubelet/pods/4cf45804-a6de-443d-95fb-300f5990e717/volumes" Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.186638 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-93a1-account-create-s99j6"] Oct 06 07:31:26 crc kubenswrapper[4769]: W1006 07:31:26.194879 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod104ca122_82e6_441e_ae08_76ab2b828724.slice/crio-88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43 WatchSource:0}: Error finding container 88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43: Status 404 returned error can't find the container with id 88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43 Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.741255 4769 generic.go:334] "Generic (PLEG): container finished" podID="104ca122-82e6-441e-ae08-76ab2b828724" containerID="4b285e7f16fb4df0a3cdc261b7146f5f58d0d58624eed958347a3ae2266ecc7e" exitCode=0 
Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.741388 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-93a1-account-create-s99j6" event={"ID":"104ca122-82e6-441e-ae08-76ab2b828724","Type":"ContainerDied","Data":"4b285e7f16fb4df0a3cdc261b7146f5f58d0d58624eed958347a3ae2266ecc7e"} Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.741635 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-93a1-account-create-s99j6" event={"ID":"104ca122-82e6-441e-ae08-76ab2b828724","Type":"ContainerStarted","Data":"88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43"} Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.746699 4769 generic.go:334] "Generic (PLEG): container finished" podID="3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" containerID="8d0ba270cffa0f3b1f1e3d59d9e9dc1627c2b61cebced653728e1d11a9b0bfe2" exitCode=0 Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.746744 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-h5szw" event={"ID":"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87","Type":"ContainerDied","Data":"8d0ba270cffa0f3b1f1e3d59d9e9dc1627c2b61cebced653728e1d11a9b0bfe2"} Oct 06 07:31:26 crc kubenswrapper[4769]: I1006 07:31:26.746792 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-h5szw" event={"ID":"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87","Type":"ContainerStarted","Data":"d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d"} Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.134151 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.141767 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285575 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dldzr\" (UniqueName: \"kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr\") pod \"104ca122-82e6-441e-ae08-76ab2b828724\" (UID: \"104ca122-82e6-441e-ae08-76ab2b828724\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285634 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285728 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285746 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285835 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285858 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-n42qp\" (UniqueName: \"kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.285900 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn\") pod \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\" (UID: \"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87\") " Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.286205 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.286245 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.286781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.286923 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run" (OuterVolumeSpecName: "var-run") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.287497 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts" (OuterVolumeSpecName: "scripts") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.291313 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr" (OuterVolumeSpecName: "kube-api-access-dldzr") pod "104ca122-82e6-441e-ae08-76ab2b828724" (UID: "104ca122-82e6-441e-ae08-76ab2b828724"). InnerVolumeSpecName "kube-api-access-dldzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.292946 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp" (OuterVolumeSpecName: "kube-api-access-n42qp") pod "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" (UID: "3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87"). InnerVolumeSpecName "kube-api-access-n42qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.387966 4769 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388012 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388030 4769 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388047 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42qp\" (UniqueName: \"kubernetes.io/projected/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-kube-api-access-n42qp\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388065 4769 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388082 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dldzr\" (UniqueName: \"kubernetes.io/projected/104ca122-82e6-441e-ae08-76ab2b828724-kube-api-access-dldzr\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.388098 4769 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.777399 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rdj69-config-h5szw" event={"ID":"3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87","Type":"ContainerDied","Data":"d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d"} Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.777968 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d28abc90eb5abd127c377941e0485fac62a4b2de9420008bbc399d51ede19c4d" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.778107 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rdj69-config-h5szw" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.793856 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-93a1-account-create-s99j6" event={"ID":"104ca122-82e6-441e-ae08-76ab2b828724","Type":"ContainerDied","Data":"88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43"} Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.794314 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cf483c01cb01f20bc5026666211fc41a0c8e2efa434637a97416e24922bf43" Oct 06 07:31:28 crc kubenswrapper[4769]: I1006 07:31:28.793956 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-93a1-account-create-s99j6" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.229650 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rdj69-config-h5szw"] Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.237167 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rdj69-config-h5szw"] Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.850109 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6e6c-account-create-52h7l"] Oct 06 07:31:29 crc kubenswrapper[4769]: E1006 07:31:29.850518 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" containerName="ovn-config" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.850550 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" containerName="ovn-config" Oct 06 07:31:29 crc kubenswrapper[4769]: E1006 07:31:29.850594 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104ca122-82e6-441e-ae08-76ab2b828724" containerName="mariadb-account-create" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.850606 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="104ca122-82e6-441e-ae08-76ab2b828724" containerName="mariadb-account-create" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.850828 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" containerName="ovn-config" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.850859 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="104ca122-82e6-441e-ae08-76ab2b828724" containerName="mariadb-account-create" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.851503 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.854466 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Oct 06 07:31:29 crc kubenswrapper[4769]: I1006 07:31:29.860992 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6e6c-account-create-52h7l"] Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.014076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkt6\" (UniqueName: \"kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6\") pod \"keystone-6e6c-account-create-52h7l\" (UID: \"e61dcddc-b711-44c0-8b71-03cbb04c8f69\") " pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.116343 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnkt6\" (UniqueName: \"kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6\") pod \"keystone-6e6c-account-create-52h7l\" (UID: \"e61dcddc-b711-44c0-8b71-03cbb04c8f69\") " pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.155434 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnkt6\" (UniqueName: \"kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6\") pod \"keystone-6e6c-account-create-52h7l\" (UID: \"e61dcddc-b711-44c0-8b71-03cbb04c8f69\") " pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.169674 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.177055 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87" path="/var/lib/kubelet/pods/3e551ce4-5eb3-49ec-8e0f-b6f7b7fdfc87/volumes" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.575824 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mqvfk"] Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.577270 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.580352 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.580851 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jlxmx" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.589500 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mqvfk"] Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.645780 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6e6c-account-create-52h7l"] Oct 06 07:31:30 crc kubenswrapper[4769]: W1006 07:31:30.649185 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode61dcddc_b711_44c0_8b71_03cbb04c8f69.slice/crio-da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa WatchSource:0}: Error finding container da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa: Status 404 returned error can't find the container with id da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.725557 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.725629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.725853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlj5t\" (UniqueName: \"kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.725914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.809163 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6e6c-account-create-52h7l" event={"ID":"e61dcddc-b711-44c0-8b71-03cbb04c8f69","Type":"ContainerStarted","Data":"61f8d9b4c580899462a2cd040dd367cdf2e20f7cffb6d53352c4fe5dbdf17cea"} Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.809575 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6e6c-account-create-52h7l" 
event={"ID":"e61dcddc-b711-44c0-8b71-03cbb04c8f69","Type":"ContainerStarted","Data":"da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa"} Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.822247 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6e6c-account-create-52h7l" podStartSLOduration=1.82222908 podStartE2EDuration="1.82222908s" podCreationTimestamp="2025-10-06 07:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:31:30.822012294 +0000 UTC m=+887.346293441" watchObservedRunningTime="2025-10-06 07:31:30.82222908 +0000 UTC m=+887.346510227" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.827959 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.828019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.828083 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlj5t\" (UniqueName: \"kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.828105 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.834069 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.834298 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.834308 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.845057 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlj5t\" (UniqueName: \"kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t\") pod \"glance-db-sync-mqvfk\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:30 crc kubenswrapper[4769]: I1006 07:31:30.897005 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mqvfk" Oct 06 07:31:31 crc kubenswrapper[4769]: I1006 07:31:31.405161 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mqvfk"] Oct 06 07:31:31 crc kubenswrapper[4769]: I1006 07:31:31.820131 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mqvfk" event={"ID":"586f85b0-c0bf-473c-9446-b9263c3ff95b","Type":"ContainerStarted","Data":"0f63cb1d59bad768cb0863d61ec21169013d84677c19ed438c71ed50025d422d"} Oct 06 07:31:31 crc kubenswrapper[4769]: I1006 07:31:31.826720 4769 generic.go:334] "Generic (PLEG): container finished" podID="e61dcddc-b711-44c0-8b71-03cbb04c8f69" containerID="61f8d9b4c580899462a2cd040dd367cdf2e20f7cffb6d53352c4fe5dbdf17cea" exitCode=0 Oct 06 07:31:31 crc kubenswrapper[4769]: I1006 07:31:31.826765 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6e6c-account-create-52h7l" event={"ID":"e61dcddc-b711-44c0-8b71-03cbb04c8f69","Type":"ContainerDied","Data":"61f8d9b4c580899462a2cd040dd367cdf2e20f7cffb6d53352c4fe5dbdf17cea"} Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.125596 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.264573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnkt6\" (UniqueName: \"kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6\") pod \"e61dcddc-b711-44c0-8b71-03cbb04c8f69\" (UID: \"e61dcddc-b711-44c0-8b71-03cbb04c8f69\") " Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.269596 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6" (OuterVolumeSpecName: "kube-api-access-mnkt6") pod "e61dcddc-b711-44c0-8b71-03cbb04c8f69" (UID: "e61dcddc-b711-44c0-8b71-03cbb04c8f69"). InnerVolumeSpecName "kube-api-access-mnkt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.366654 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnkt6\" (UniqueName: \"kubernetes.io/projected/e61dcddc-b711-44c0-8b71-03cbb04c8f69-kube-api-access-mnkt6\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.845990 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6e6c-account-create-52h7l" event={"ID":"e61dcddc-b711-44c0-8b71-03cbb04c8f69","Type":"ContainerDied","Data":"da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa"} Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.846033 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6e6c-account-create-52h7l" Oct 06 07:31:33 crc kubenswrapper[4769]: I1006 07:31:33.846039 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da3e6f3037c7b0be85378251b76263ee9280426b61efa53b133e686392d7eeaa" Oct 06 07:31:35 crc kubenswrapper[4769]: I1006 07:31:35.302720 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:35 crc kubenswrapper[4769]: I1006 07:31:35.310236 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/91c9fe86-3c6f-485c-94a9-5adc4a88d14f-etc-swift\") pod \"swift-storage-0\" (UID: \"91c9fe86-3c6f-485c-94a9-5adc4a88d14f\") " pod="openstack/swift-storage-0" Oct 06 07:31:35 crc kubenswrapper[4769]: I1006 07:31:35.332249 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Oct 06 07:31:36 crc kubenswrapper[4769]: I1006 07:31:35.862446 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Oct 06 07:31:36 crc kubenswrapper[4769]: I1006 07:31:36.496718 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:31:36 crc kubenswrapper[4769]: I1006 07:31:36.816609 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 07:31:36 crc kubenswrapper[4769]: I1006 07:31:36.879072 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"401dc0d6e80266b3d6bde0623161827930ed03f5025daa7953a9b0cfab9235f8"} Oct 06 07:31:36 crc kubenswrapper[4769]: I1006 07:31:36.879118 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"802b72f8b158cefda6355e725952dd751133d3069783e64ced87c1f1c15ece44"} Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.875134 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qx4qw"] Oct 06 07:31:37 crc kubenswrapper[4769]: E1006 07:31:37.875771 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61dcddc-b711-44c0-8b71-03cbb04c8f69" containerName="mariadb-account-create" Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.875784 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61dcddc-b711-44c0-8b71-03cbb04c8f69" containerName="mariadb-account-create" Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.875983 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61dcddc-b711-44c0-8b71-03cbb04c8f69" containerName="mariadb-account-create" Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.876470 
4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.890864 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qx4qw"] Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.906703 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"ef062f8aadfe3487fe0ec8fcbbe6f317d40a73943e24f3575b52bed02a5d425a"} Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.906747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"4029e2e61fde3a65a7f2ac096e7c9fd41a037236cca678bc0d7b4bf7b6bb9ffa"} Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.906757 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"673620edb65aef18ae9c0d2753fcb4e7600ab1d85d20f09df5dbb1c27ace69d6"} Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.960101 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59fzq\" (UniqueName: \"kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq\") pod \"cinder-db-create-qx4qw\" (UID: \"7e6a4579-eb83-4d6a-979b-375d1844efd0\") " pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.997257 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-xhzkf"] Oct 06 07:31:37 crc kubenswrapper[4769]: I1006 07:31:37.999298 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.019905 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xhzkf"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.061413 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59fzq\" (UniqueName: \"kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq\") pod \"cinder-db-create-qx4qw\" (UID: \"7e6a4579-eb83-4d6a-979b-375d1844efd0\") " pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.061515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g5m4\" (UniqueName: \"kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4\") pod \"barbican-db-create-xhzkf\" (UID: \"484e70cd-c319-46ae-8936-54c215567b03\") " pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.088298 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59fzq\" (UniqueName: \"kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq\") pod \"cinder-db-create-qx4qw\" (UID: \"7e6a4579-eb83-4d6a-979b-375d1844efd0\") " pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.163306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g5m4\" (UniqueName: \"kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4\") pod \"barbican-db-create-xhzkf\" (UID: \"484e70cd-c319-46ae-8936-54c215567b03\") " pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.181268 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g5m4\" (UniqueName: 
\"kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4\") pod \"barbican-db-create-xhzkf\" (UID: \"484e70cd-c319-46ae-8936-54c215567b03\") " pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.214406 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.225941 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x9tdl"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.227347 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.230018 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.230237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.230316 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wgm6x" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.230538 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.237456 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x9tdl"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.272163 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-w6j2v"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.273170 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.287058 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w6j2v"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.324279 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.367307 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzhw\" (UniqueName: \"kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.367365 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.367460 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcq8h\" (UniqueName: \"kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h\") pod \"neutron-db-create-w6j2v\" (UID: \"8263a2a5-f139-46de-bc0c-3457a8ffee1a\") " pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.367509 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " 
pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.468919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcq8h\" (UniqueName: \"kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h\") pod \"neutron-db-create-w6j2v\" (UID: \"8263a2a5-f139-46de-bc0c-3457a8ffee1a\") " pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.469040 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.469100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvzhw\" (UniqueName: \"kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.469155 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.480506 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.484532 
4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.492007 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvzhw\" (UniqueName: \"kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw\") pod \"keystone-db-sync-x9tdl\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.494157 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcq8h\" (UniqueName: \"kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h\") pod \"neutron-db-create-w6j2v\" (UID: \"8263a2a5-f139-46de-bc0c-3457a8ffee1a\") " pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.551926 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.596009 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.634763 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qx4qw"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.929015 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-xhzkf"] Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.936485 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"e631f94970a66a7f599ccfe9feebe3fe64d3b0b3afc5fb9237e5ebdb985780d7"} Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.936803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"c459dfa150200b668c289eff29516fd5538d8f0b00f98320aceae84bdff10641"} Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.939920 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qx4qw" event={"ID":"7e6a4579-eb83-4d6a-979b-375d1844efd0","Type":"ContainerStarted","Data":"aaf545cfe19bb0ecc8004b06dfc4c31ce6f753a526ab5193183bd41b3f22ea30"} Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.939957 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qx4qw" event={"ID":"7e6a4579-eb83-4d6a-979b-375d1844efd0","Type":"ContainerStarted","Data":"81cfc5361a8653d516e507fcab4abc808c175f2c5b44e03bd27db7bb9fdb2ba0"} Oct 06 07:31:38 crc kubenswrapper[4769]: I1006 07:31:38.957814 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qx4qw" podStartSLOduration=1.957794137 podStartE2EDuration="1.957794137s" podCreationTimestamp="2025-10-06 07:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-06 07:31:38.955127714 +0000 UTC m=+895.479408861" watchObservedRunningTime="2025-10-06 07:31:38.957794137 +0000 UTC m=+895.482075284" Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.020444 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x9tdl"] Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.104641 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-w6j2v"] Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.969089 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x9tdl" event={"ID":"03b6f9be-40d3-47be-964f-e271e14a0d84","Type":"ContainerStarted","Data":"f8c624c117e6abf9da5695665617cd2d30f1b52cc7e48320ed275de5f5ebb64d"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.971291 4769 generic.go:334] "Generic (PLEG): container finished" podID="8263a2a5-f139-46de-bc0c-3457a8ffee1a" containerID="56a3a9c189436c45d04f7514e8736665125953129e88bdb092d761c3c998269b" exitCode=0 Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.971373 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w6j2v" event={"ID":"8263a2a5-f139-46de-bc0c-3457a8ffee1a","Type":"ContainerDied","Data":"56a3a9c189436c45d04f7514e8736665125953129e88bdb092d761c3c998269b"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.971407 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w6j2v" event={"ID":"8263a2a5-f139-46de-bc0c-3457a8ffee1a","Type":"ContainerStarted","Data":"0a9da4a939fcde03dec276cc6d643c2b95db457d7f0bac3e75b63108231e3cae"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.976193 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"2032c423974dbab8c20da1fdd5d30d16257a52b86d3360734ccad16422936515"} Oct 06 07:31:39 crc 
kubenswrapper[4769]: I1006 07:31:39.976231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"a96a111f3d7795d1c3bb45f3549921754bddbac6ed6eb7b4a36823814d1714b6"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.977635 4769 generic.go:334] "Generic (PLEG): container finished" podID="7e6a4579-eb83-4d6a-979b-375d1844efd0" containerID="aaf545cfe19bb0ecc8004b06dfc4c31ce6f753a526ab5193183bd41b3f22ea30" exitCode=0 Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.977797 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qx4qw" event={"ID":"7e6a4579-eb83-4d6a-979b-375d1844efd0","Type":"ContainerDied","Data":"aaf545cfe19bb0ecc8004b06dfc4c31ce6f753a526ab5193183bd41b3f22ea30"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.989935 4769 generic.go:334] "Generic (PLEG): container finished" podID="484e70cd-c319-46ae-8936-54c215567b03" containerID="bba27db932740fc3066c4e967ddc4180c87e3307f4f532f7cac6ef6588759196" exitCode=0 Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.989977 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xhzkf" event={"ID":"484e70cd-c319-46ae-8936-54c215567b03","Type":"ContainerDied","Data":"bba27db932740fc3066c4e967ddc4180c87e3307f4f532f7cac6ef6588759196"} Oct 06 07:31:39 crc kubenswrapper[4769]: I1006 07:31:39.989998 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xhzkf" event={"ID":"484e70cd-c319-46ae-8936-54c215567b03","Type":"ContainerStarted","Data":"148bd49a3b008cf8072d75b5a29cf1001cb80640fe20d748a940821677fa2440"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.003747 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"f4b193b1e553457f468b22def69a75aedfd554e5f69bf9395088a6dd432700c8"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.003797 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"aacc33be5e05002f9880431faf6f948abc9b6b09bb3524f251047ae0d6a3d058"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.003807 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"f4cac39f4d66162ac1761a7378aef091283dcc5d023114da2255fc948454abab"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.003816 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"3cdfb423411f78b8d50cf631dc245c48979542710f20c0e7dd636a477421313d"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.003824 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"3e7faec1aff8ef978b3537fa8ad2abe67c2840b5cecf7b7a9bfb50a7a3301c87"} Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.321906 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.398447 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.400990 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.432864 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59fzq\" (UniqueName: \"kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq\") pod \"7e6a4579-eb83-4d6a-979b-375d1844efd0\" (UID: \"7e6a4579-eb83-4d6a-979b-375d1844efd0\") " Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.438056 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq" (OuterVolumeSpecName: "kube-api-access-59fzq") pod "7e6a4579-eb83-4d6a-979b-375d1844efd0" (UID: "7e6a4579-eb83-4d6a-979b-375d1844efd0"). InnerVolumeSpecName "kube-api-access-59fzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.534450 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g5m4\" (UniqueName: \"kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4\") pod \"484e70cd-c319-46ae-8936-54c215567b03\" (UID: \"484e70cd-c319-46ae-8936-54c215567b03\") " Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.534553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcq8h\" (UniqueName: \"kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h\") pod \"8263a2a5-f139-46de-bc0c-3457a8ffee1a\" (UID: \"8263a2a5-f139-46de-bc0c-3457a8ffee1a\") " Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.534830 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59fzq\" (UniqueName: \"kubernetes.io/projected/7e6a4579-eb83-4d6a-979b-375d1844efd0-kube-api-access-59fzq\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.537696 4769 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h" (OuterVolumeSpecName: "kube-api-access-vcq8h") pod "8263a2a5-f139-46de-bc0c-3457a8ffee1a" (UID: "8263a2a5-f139-46de-bc0c-3457a8ffee1a"). InnerVolumeSpecName "kube-api-access-vcq8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.537976 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4" (OuterVolumeSpecName: "kube-api-access-4g5m4") pod "484e70cd-c319-46ae-8936-54c215567b03" (UID: "484e70cd-c319-46ae-8936-54c215567b03"). InnerVolumeSpecName "kube-api-access-4g5m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.636263 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcq8h\" (UniqueName: \"kubernetes.io/projected/8263a2a5-f139-46de-bc0c-3457a8ffee1a-kube-api-access-vcq8h\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:41 crc kubenswrapper[4769]: I1006 07:31:41.636291 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g5m4\" (UniqueName: \"kubernetes.io/projected/484e70cd-c319-46ae-8936-54c215567b03-kube-api-access-4g5m4\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.013914 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-w6j2v" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.013910 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-w6j2v" event={"ID":"8263a2a5-f139-46de-bc0c-3457a8ffee1a","Type":"ContainerDied","Data":"0a9da4a939fcde03dec276cc6d643c2b95db457d7f0bac3e75b63108231e3cae"} Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.014332 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9da4a939fcde03dec276cc6d643c2b95db457d7f0bac3e75b63108231e3cae" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.021513 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"c7fc55dad5784e022da4cf23e38e4e4c02ab3c6f9aa11168793702e837879b32"} Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.021631 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"91c9fe86-3c6f-485c-94a9-5adc4a88d14f","Type":"ContainerStarted","Data":"533266567a2ee1c64aec95b5848e47cc9f646dfed95bd214555bfca3003d76b8"} Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.026687 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qx4qw" event={"ID":"7e6a4579-eb83-4d6a-979b-375d1844efd0","Type":"ContainerDied","Data":"81cfc5361a8653d516e507fcab4abc808c175f2c5b44e03bd27db7bb9fdb2ba0"} Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.026708 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qx4qw" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.026722 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cfc5361a8653d516e507fcab4abc808c175f2c5b44e03bd27db7bb9fdb2ba0" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.028763 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-xhzkf" event={"ID":"484e70cd-c319-46ae-8936-54c215567b03","Type":"ContainerDied","Data":"148bd49a3b008cf8072d75b5a29cf1001cb80640fe20d748a940821677fa2440"} Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.028783 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148bd49a3b008cf8072d75b5a29cf1001cb80640fe20d748a940821677fa2440" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.028842 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-xhzkf" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.065406 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.898055286 podStartE2EDuration="40.065390433s" podCreationTimestamp="2025-10-06 07:31:02 +0000 UTC" firstStartedPulling="2025-10-06 07:31:35.875281046 +0000 UTC m=+892.399562183" lastFinishedPulling="2025-10-06 07:31:40.042616183 +0000 UTC m=+896.566897330" observedRunningTime="2025-10-06 07:31:42.059087811 +0000 UTC m=+898.583368958" watchObservedRunningTime="2025-10-06 07:31:42.065390433 +0000 UTC m=+898.589671580" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.312960 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:31:42 crc kubenswrapper[4769]: E1006 07:31:42.313392 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484e70cd-c319-46ae-8936-54c215567b03" containerName="mariadb-database-create" Oct 06 07:31:42 crc 
kubenswrapper[4769]: I1006 07:31:42.313415 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="484e70cd-c319-46ae-8936-54c215567b03" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: E1006 07:31:42.313452 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6a4579-eb83-4d6a-979b-375d1844efd0" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.313461 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6a4579-eb83-4d6a-979b-375d1844efd0" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: E1006 07:31:42.313487 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8263a2a5-f139-46de-bc0c-3457a8ffee1a" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.313497 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8263a2a5-f139-46de-bc0c-3457a8ffee1a" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.313683 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6a4579-eb83-4d6a-979b-375d1844efd0" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.313747 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="484e70cd-c319-46ae-8936-54c215567b03" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.313762 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8263a2a5-f139-46de-bc0c-3457a8ffee1a" containerName="mariadb-database-create" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.315850 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.322108 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.324562 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.448838 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkgsp\" (UniqueName: \"kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.449040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.449171 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.449216 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " 
pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.449246 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.449498 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550547 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkgsp\" (UniqueName: \"kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550699 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " 
pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550781 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550802 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.550865 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.551759 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.551758 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 
07:31:42.551821 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.552212 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.552382 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.576497 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkgsp\" (UniqueName: \"kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp\") pod \"dnsmasq-dns-64fcb8f85-w8rj7\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:42 crc kubenswrapper[4769]: I1006 07:31:42.647458 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:47.996645 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-051d-account-create-q6v8v"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.002809 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.005927 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.014642 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-051d-account-create-q6v8v"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.139103 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmb8n\" (UniqueName: \"kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n\") pod \"cinder-051d-account-create-q6v8v\" (UID: \"e0c03827-a483-4240-afd5-34c086e9226a\") " pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.198045 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fffa-account-create-lhx4k"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.199207 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.202861 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.205394 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fffa-account-create-lhx4k"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.240889 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmb8n\" (UniqueName: \"kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n\") pod \"cinder-051d-account-create-q6v8v\" (UID: \"e0c03827-a483-4240-afd5-34c086e9226a\") " pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.260361 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmb8n\" (UniqueName: \"kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n\") pod \"cinder-051d-account-create-q6v8v\" (UID: \"e0c03827-a483-4240-afd5-34c086e9226a\") " pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.336332 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.342513 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjft7\" (UniqueName: \"kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7\") pod \"barbican-fffa-account-create-lhx4k\" (UID: \"b3802f25-aa75-4581-afc1-758dea0695d5\") " pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.403582 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3558-account-create-bv2br"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.404893 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.407634 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.415797 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3558-account-create-bv2br"] Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.444226 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjft7\" (UniqueName: \"kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7\") pod \"barbican-fffa-account-create-lhx4k\" (UID: \"b3802f25-aa75-4581-afc1-758dea0695d5\") " pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.462140 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjft7\" (UniqueName: \"kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7\") pod \"barbican-fffa-account-create-lhx4k\" (UID: \"b3802f25-aa75-4581-afc1-758dea0695d5\") " 
pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.517707 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.546011 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slln8\" (UniqueName: \"kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8\") pod \"neutron-3558-account-create-bv2br\" (UID: \"10374278-0491-41ce-8028-9d6a5c8bf677\") " pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.647130 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slln8\" (UniqueName: \"kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8\") pod \"neutron-3558-account-create-bv2br\" (UID: \"10374278-0491-41ce-8028-9d6a5c8bf677\") " pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.680970 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slln8\" (UniqueName: \"kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8\") pod \"neutron-3558-account-create-bv2br\" (UID: \"10374278-0491-41ce-8028-9d6a5c8bf677\") " pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:48 crc kubenswrapper[4769]: I1006 07:31:48.762718 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:51 crc kubenswrapper[4769]: I1006 07:31:51.957100 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:31:51 crc kubenswrapper[4769]: W1006 07:31:51.959230 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49491217_3434_4f19_8d20_7582bb6a16c6.slice/crio-c62eb9b618df87c6c92108546008b7a59fd2502a677a5d40c3028f6f2c2c3981 WatchSource:0}: Error finding container c62eb9b618df87c6c92108546008b7a59fd2502a677a5d40c3028f6f2c2c3981: Status 404 returned error can't find the container with id c62eb9b618df87c6c92108546008b7a59fd2502a677a5d40c3028f6f2c2c3981 Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.042465 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-051d-account-create-q6v8v"] Oct 06 07:31:52 crc kubenswrapper[4769]: W1006 07:31:52.050552 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0c03827_a483_4240_afd5_34c086e9226a.slice/crio-51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2 WatchSource:0}: Error finding container 51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2: Status 404 returned error can't find the container with id 51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2 Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.090045 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fffa-account-create-lhx4k"] Oct 06 07:31:52 crc kubenswrapper[4769]: W1006 07:31:52.094195 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3802f25_aa75_4581_afc1_758dea0695d5.slice/crio-57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff WatchSource:0}: Error finding 
container 57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff: Status 404 returned error can't find the container with id 57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.141117 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3558-account-create-bv2br"] Oct 06 07:31:52 crc kubenswrapper[4769]: W1006 07:31:52.146725 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10374278_0491_41ce_8028_9d6a5c8bf677.slice/crio-53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132 WatchSource:0}: Error finding container 53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132: Status 404 returned error can't find the container with id 53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132 Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.148128 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fffa-account-create-lhx4k" event={"ID":"b3802f25-aa75-4581-afc1-758dea0695d5","Type":"ContainerStarted","Data":"57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff"} Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.154110 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x9tdl" event={"ID":"03b6f9be-40d3-47be-964f-e271e14a0d84","Type":"ContainerStarted","Data":"f9bfa47e16b68e1996cffe7d0c916889cc632779b05f93c09d859f9fae755623"} Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.161625 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-051d-account-create-q6v8v" event={"ID":"e0c03827-a483-4240-afd5-34c086e9226a","Type":"ContainerStarted","Data":"51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2"} Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.176734 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-sync-x9tdl" podStartSLOduration=1.599781461 podStartE2EDuration="14.17671141s" podCreationTimestamp="2025-10-06 07:31:38 +0000 UTC" firstStartedPulling="2025-10-06 07:31:39.035899601 +0000 UTC m=+895.560180748" lastFinishedPulling="2025-10-06 07:31:51.61282955 +0000 UTC m=+908.137110697" observedRunningTime="2025-10-06 07:31:52.167699923 +0000 UTC m=+908.691981070" watchObservedRunningTime="2025-10-06 07:31:52.17671141 +0000 UTC m=+908.700992557" Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.180002 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" event={"ID":"49491217-3434-4f19-8d20-7582bb6a16c6","Type":"ContainerStarted","Data":"c62eb9b618df87c6c92108546008b7a59fd2502a677a5d40c3028f6f2c2c3981"} Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.246102 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:31:52 crc kubenswrapper[4769]: I1006 07:31:52.246170 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.178967 4769 generic.go:334] "Generic (PLEG): container finished" podID="b3802f25-aa75-4581-afc1-758dea0695d5" containerID="351ee1d2b7d8673491c67436b8edd38490d0fd8f258d97a932d5ed02240e277f" exitCode=0 Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.179042 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fffa-account-create-lhx4k" 
event={"ID":"b3802f25-aa75-4581-afc1-758dea0695d5","Type":"ContainerDied","Data":"351ee1d2b7d8673491c67436b8edd38490d0fd8f258d97a932d5ed02240e277f"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.181525 4769 generic.go:334] "Generic (PLEG): container finished" podID="49491217-3434-4f19-8d20-7582bb6a16c6" containerID="0c222336685823599bd76e1fc511871b92aee29581caa0c3d4846e88f83c1a46" exitCode=0 Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.181586 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" event={"ID":"49491217-3434-4f19-8d20-7582bb6a16c6","Type":"ContainerDied","Data":"0c222336685823599bd76e1fc511871b92aee29581caa0c3d4846e88f83c1a46"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.183812 4769 generic.go:334] "Generic (PLEG): container finished" podID="e0c03827-a483-4240-afd5-34c086e9226a" containerID="7f2ac8383615cce778273ed66df49510021c22f3683da21635506b0067f46184" exitCode=0 Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.183885 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-051d-account-create-q6v8v" event={"ID":"e0c03827-a483-4240-afd5-34c086e9226a","Type":"ContainerDied","Data":"7f2ac8383615cce778273ed66df49510021c22f3683da21635506b0067f46184"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.185539 4769 generic.go:334] "Generic (PLEG): container finished" podID="10374278-0491-41ce-8028-9d6a5c8bf677" containerID="16af253dc406622988b38eb18b6229779a434a3a3946dd2712943eeb0222f8ad" exitCode=0 Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.185647 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3558-account-create-bv2br" event={"ID":"10374278-0491-41ce-8028-9d6a5c8bf677","Type":"ContainerDied","Data":"16af253dc406622988b38eb18b6229779a434a3a3946dd2712943eeb0222f8ad"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.185663 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-3558-account-create-bv2br" event={"ID":"10374278-0491-41ce-8028-9d6a5c8bf677","Type":"ContainerStarted","Data":"53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.207322 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mqvfk" event={"ID":"586f85b0-c0bf-473c-9446-b9263c3ff95b","Type":"ContainerStarted","Data":"4fbf4553296c59fe875e546c9ee0c8c3961d8508f77f4fec5e2b65ad16682b4e"} Oct 06 07:31:53 crc kubenswrapper[4769]: I1006 07:31:53.286605 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mqvfk" podStartSLOduration=3.094736024 podStartE2EDuration="23.28657583s" podCreationTimestamp="2025-10-06 07:31:30 +0000 UTC" firstStartedPulling="2025-10-06 07:31:31.421023865 +0000 UTC m=+887.945305012" lastFinishedPulling="2025-10-06 07:31:51.612863671 +0000 UTC m=+908.137144818" observedRunningTime="2025-10-06 07:31:53.278983492 +0000 UTC m=+909.803264639" watchObservedRunningTime="2025-10-06 07:31:53.28657583 +0000 UTC m=+909.810857017" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.223913 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" event={"ID":"49491217-3434-4f19-8d20-7582bb6a16c6","Type":"ContainerStarted","Data":"9463b2a173aeeb184ff71e3c26a548c6aeaeb9c664a90694bad8442a2fdbe0a2"} Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.270194 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" podStartSLOduration=12.270149738 podStartE2EDuration="12.270149738s" podCreationTimestamp="2025-10-06 07:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:31:54.256750142 +0000 UTC m=+910.781031359" watchObservedRunningTime="2025-10-06 07:31:54.270149738 +0000 UTC 
m=+910.794430915" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.618859 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.625208 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.635225 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.785092 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjft7\" (UniqueName: \"kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7\") pod \"b3802f25-aa75-4581-afc1-758dea0695d5\" (UID: \"b3802f25-aa75-4581-afc1-758dea0695d5\") " Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.785181 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmb8n\" (UniqueName: \"kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n\") pod \"e0c03827-a483-4240-afd5-34c086e9226a\" (UID: \"e0c03827-a483-4240-afd5-34c086e9226a\") " Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.785305 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slln8\" (UniqueName: \"kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8\") pod \"10374278-0491-41ce-8028-9d6a5c8bf677\" (UID: \"10374278-0491-41ce-8028-9d6a5c8bf677\") " Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.790374 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n" (OuterVolumeSpecName: "kube-api-access-lmb8n") pod 
"e0c03827-a483-4240-afd5-34c086e9226a" (UID: "e0c03827-a483-4240-afd5-34c086e9226a"). InnerVolumeSpecName "kube-api-access-lmb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.790921 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7" (OuterVolumeSpecName: "kube-api-access-hjft7") pod "b3802f25-aa75-4581-afc1-758dea0695d5" (UID: "b3802f25-aa75-4581-afc1-758dea0695d5"). InnerVolumeSpecName "kube-api-access-hjft7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.798316 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8" (OuterVolumeSpecName: "kube-api-access-slln8") pod "10374278-0491-41ce-8028-9d6a5c8bf677" (UID: "10374278-0491-41ce-8028-9d6a5c8bf677"). InnerVolumeSpecName "kube-api-access-slln8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.886786 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjft7\" (UniqueName: \"kubernetes.io/projected/b3802f25-aa75-4581-afc1-758dea0695d5-kube-api-access-hjft7\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.886836 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmb8n\" (UniqueName: \"kubernetes.io/projected/e0c03827-a483-4240-afd5-34c086e9226a-kube-api-access-lmb8n\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:54 crc kubenswrapper[4769]: I1006 07:31:54.886855 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slln8\" (UniqueName: \"kubernetes.io/projected/10374278-0491-41ce-8028-9d6a5c8bf677-kube-api-access-slln8\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.231863 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fffa-account-create-lhx4k" event={"ID":"b3802f25-aa75-4581-afc1-758dea0695d5","Type":"ContainerDied","Data":"57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff"} Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.231871 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fffa-account-create-lhx4k" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.231904 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57eae32b12220027f02e2b207b157877b071995128ef2c4ec9deea04ee1daaff" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.233153 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-051d-account-create-q6v8v" event={"ID":"e0c03827-a483-4240-afd5-34c086e9226a","Type":"ContainerDied","Data":"51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2"} Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.233188 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-051d-account-create-q6v8v" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.233190 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51dd3ea1d8a905a7763c278b07acb14f81e82425c5bc3b109d8c1ca35d64bda2" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.235203 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3558-account-create-bv2br" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.235250 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3558-account-create-bv2br" event={"ID":"10374278-0491-41ce-8028-9d6a5c8bf677","Type":"ContainerDied","Data":"53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132"} Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.235270 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e3c601c7082d76049f24ecbee240cf351170316dddaa767b7a0410c0050132" Oct 06 07:31:55 crc kubenswrapper[4769]: I1006 07:31:55.235296 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:57 crc kubenswrapper[4769]: I1006 07:31:57.254952 4769 generic.go:334] "Generic (PLEG): container finished" podID="03b6f9be-40d3-47be-964f-e271e14a0d84" containerID="f9bfa47e16b68e1996cffe7d0c916889cc632779b05f93c09d859f9fae755623" exitCode=0 Oct 06 07:31:57 crc kubenswrapper[4769]: I1006 07:31:57.255038 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x9tdl" event={"ID":"03b6f9be-40d3-47be-964f-e271e14a0d84","Type":"ContainerDied","Data":"f9bfa47e16b68e1996cffe7d0c916889cc632779b05f93c09d859f9fae755623"} Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.536110 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.651837 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data\") pod \"03b6f9be-40d3-47be-964f-e271e14a0d84\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.651946 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle\") pod \"03b6f9be-40d3-47be-964f-e271e14a0d84\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.652037 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvzhw\" (UniqueName: \"kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw\") pod \"03b6f9be-40d3-47be-964f-e271e14a0d84\" (UID: \"03b6f9be-40d3-47be-964f-e271e14a0d84\") " Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.657673 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw" (OuterVolumeSpecName: "kube-api-access-gvzhw") pod "03b6f9be-40d3-47be-964f-e271e14a0d84" (UID: "03b6f9be-40d3-47be-964f-e271e14a0d84"). InnerVolumeSpecName "kube-api-access-gvzhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.681037 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03b6f9be-40d3-47be-964f-e271e14a0d84" (UID: "03b6f9be-40d3-47be-964f-e271e14a0d84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.706086 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data" (OuterVolumeSpecName: "config-data") pod "03b6f9be-40d3-47be-964f-e271e14a0d84" (UID: "03b6f9be-40d3-47be-964f-e271e14a0d84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.753532 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvzhw\" (UniqueName: \"kubernetes.io/projected/03b6f9be-40d3-47be-964f-e271e14a0d84-kube-api-access-gvzhw\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.753563 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:58 crc kubenswrapper[4769]: I1006 07:31:58.753572 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03b6f9be-40d3-47be-964f-e271e14a0d84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.273734 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x9tdl" event={"ID":"03b6f9be-40d3-47be-964f-e271e14a0d84","Type":"ContainerDied","Data":"f8c624c117e6abf9da5695665617cd2d30f1b52cc7e48320ed275de5f5ebb64d"} Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.273781 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c624c117e6abf9da5695665617cd2d30f1b52cc7e48320ed275de5f5ebb64d" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.273806 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x9tdl" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.581092 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.581584 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="dnsmasq-dns" containerID="cri-o://9463b2a173aeeb184ff71e3c26a548c6aeaeb9c664a90694bad8442a2fdbe0a2" gracePeriod=10 Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.584593 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.613988 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l4gqp"] Oct 06 07:31:59 crc kubenswrapper[4769]: E1006 07:31:59.614311 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b6f9be-40d3-47be-964f-e271e14a0d84" containerName="keystone-db-sync" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614327 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b6f9be-40d3-47be-964f-e271e14a0d84" containerName="keystone-db-sync" Oct 06 07:31:59 crc kubenswrapper[4769]: E1006 07:31:59.614336 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c03827-a483-4240-afd5-34c086e9226a" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614343 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c03827-a483-4240-afd5-34c086e9226a" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: E1006 07:31:59.614357 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3802f25-aa75-4581-afc1-758dea0695d5" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 
07:31:59.614363 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3802f25-aa75-4581-afc1-758dea0695d5" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: E1006 07:31:59.614379 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10374278-0491-41ce-8028-9d6a5c8bf677" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614384 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="10374278-0491-41ce-8028-9d6a5c8bf677" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614561 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="10374278-0491-41ce-8028-9d6a5c8bf677" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614576 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b6f9be-40d3-47be-964f-e271e14a0d84" containerName="keystone-db-sync" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614584 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3802f25-aa75-4581-afc1-758dea0695d5" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.614596 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c03827-a483-4240-afd5-34c086e9226a" containerName="mariadb-account-create" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.615109 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.620923 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wgm6x" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.621163 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.621354 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.621548 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.623350 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.624998 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.652062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4gqp"] Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.665661 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.771970 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfdsq\" (UniqueName: \"kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772015 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772036 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772071 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772087 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772120 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772162 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772196 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772216 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772238 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbmg\" (UniqueName: \"kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.772265 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874049 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfdsq\" (UniqueName: \"kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874119 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874182 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874203 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874237 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874262 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874291 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874331 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874359 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb\") 
pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcbmg\" (UniqueName: \"kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.874433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.878470 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.880108 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.880908 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 
06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.881407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.887192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.900130 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.904825 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.907137 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.922685 4769 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-db-sync-thnwc"] Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.923548 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.923828 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.924972 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-thnwc" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.948866 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 07:31:59 crc kubenswrapper[4769]: I1006 07:31:59.949375 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4dvbm" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.009539 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfdsq\" (UniqueName: \"kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq\") pod \"dnsmasq-dns-75d46dcdf-x65rz\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.009730 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.010626 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dcbmg\" (UniqueName: \"kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg\") pod \"keystone-bootstrap-l4gqp\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.024204 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-thnwc"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.027331 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.027507 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.027560 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd59p\" (UniqueName: \"kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.046478 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qtgml"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.048109 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.063664 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.065652 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.065822 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.079687 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hll9s" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.094412 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qtgml"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.094772 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.123309 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.123517 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133119 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133485 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2vhg\" (UniqueName: 
\"kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133511 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133545 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133569 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133593 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd59p\" (UniqueName: \"kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133611 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql7d2\" (UniqueName: 
\"kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133646 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133666 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133685 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133713 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133731 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133747 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133772 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133789 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.133806 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.139358 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.148521 4769 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.155017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.162046 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd59p\" (UniqueName: \"kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p\") pod \"neutron-db-sync-thnwc\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.221252 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.221860 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.232705 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2ctln"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.233977 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235298 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235341 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235360 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235395 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235435 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235453 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235479 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235503 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235533 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235550 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235573 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjsnl\" (UniqueName: 
\"kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235606 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235627 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2vhg\" (UniqueName: \"kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235659 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235701 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235736 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql7d2\" (UniqueName: \"kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2\") pod \"ceilometer-0\" (UID: 
\"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235754 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235833 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.235911 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.241153 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.247490 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.249551 4769 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2k6x8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.251934 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.252309 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.252517 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.253565 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.254894 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.254895 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.255545 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " 
pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.258579 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ctln"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.266714 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.268944 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.285578 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.286249 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.290187 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.292147 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.293589 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql7d2\" (UniqueName: \"kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2\") pod \"ceilometer-0\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.297797 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2vhg\" (UniqueName: \"kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg\") pod \"cinder-db-sync-qtgml\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.322104 4769 generic.go:334] "Generic (PLEG): container finished" podID="49491217-3434-4f19-8d20-7582bb6a16c6" containerID="9463b2a173aeeb184ff71e3c26a548c6aeaeb9c664a90694bad8442a2fdbe0a2" exitCode=0 Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.322198 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" event={"ID":"49491217-3434-4f19-8d20-7582bb6a16c6","Type":"ContainerDied","Data":"9463b2a173aeeb184ff71e3c26a548c6aeaeb9c664a90694bad8442a2fdbe0a2"} Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.324155 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.328566 4769 generic.go:334] "Generic (PLEG): container finished" podID="586f85b0-c0bf-473c-9446-b9263c3ff95b" containerID="4fbf4553296c59fe875e546c9ee0c8c3961d8508f77f4fec5e2b65ad16682b4e" exitCode=0 Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.328622 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mqvfk" 
event={"ID":"586f85b0-c0bf-473c-9446-b9263c3ff95b","Type":"ContainerDied","Data":"4fbf4553296c59fe875e546c9ee0c8c3961d8508f77f4fec5e2b65ad16682b4e"} Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.332401 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nl2f8"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.333574 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.335182 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.336696 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l9rjm" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.337528 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.337586 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.337629 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 
07:32:00.337666 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.337721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.343615 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.345005 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.345393 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.347721 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nl2f8"] Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 
07:32:00.347786 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.349399 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.353922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl\") pod \"placement-db-sync-2ctln\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.366749 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.393527 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.440817 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmv5l\" (UniqueName: \"kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441282 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441332 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441355 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441392 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: 
\"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441475 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441549 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.441579 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.442565 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.543894 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.543986 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544035 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544093 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkgsp\" (UniqueName: \"kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544116 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544185 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544577 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544611 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544648 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmv5l\" (UniqueName: \"kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544665 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config\") pod 
\"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544711 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544754 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.544782 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.545669 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: 
\"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.546589 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.546632 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.547149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.548068 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.579961 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: 
I1006 07:32:00.580162 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp" (OuterVolumeSpecName: "kube-api-access-pkgsp") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "kube-api-access-pkgsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.580326 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmv5l\" (UniqueName: \"kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.581597 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle\") pod \"barbican-db-sync-nl2f8\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.589575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct\") pod \"dnsmasq-dns-6486574f69-wqqdq\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.595835 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.610944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.617315 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.637726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.648963 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.649558 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") pod \"49491217-3434-4f19-8d20-7582bb6a16c6\" (UID: \"49491217-3434-4f19-8d20-7582bb6a16c6\") " Oct 06 07:32:00 crc kubenswrapper[4769]: W1006 07:32:00.649669 4769 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/49491217-3434-4f19-8d20-7582bb6a16c6/volumes/kubernetes.io~configmap/dns-swift-storage-0 Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.649685 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.650406 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.650440 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.650451 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkgsp\" (UniqueName: \"kubernetes.io/projected/49491217-3434-4f19-8d20-7582bb6a16c6-kube-api-access-pkgsp\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.650461 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.667212 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config" (OuterVolumeSpecName: "config") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.667611 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "49491217-3434-4f19-8d20-7582bb6a16c6" (UID: "49491217-3434-4f19-8d20-7582bb6a16c6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.675985 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.750952 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.751372 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/49491217-3434-4f19-8d20-7582bb6a16c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:00 crc kubenswrapper[4769]: I1006 07:32:00.924953 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.121443 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-thnwc"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.182641 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4gqp"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.236232 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qtgml"] Oct 06 07:32:01 crc kubenswrapper[4769]: W1006 07:32:01.254195 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dc18379_7117_430b_9d0f_65115eaedf51.slice/crio-f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08 WatchSource:0}: Error finding container f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08: Status 404 returned error can't find the container with id f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08 Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.373205 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qtgml" event={"ID":"1dc18379-7117-430b-9d0f-65115eaedf51","Type":"ContainerStarted","Data":"f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.379736 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thnwc" event={"ID":"7d89b00e-b803-4c22-b820-653e98f239b0","Type":"ContainerStarted","Data":"1ee8172a1bf388c3b5fb0d54e8840853dbe8ece8ce3a1b0aba2111a56802a776"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.384803 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" event={"ID":"49491217-3434-4f19-8d20-7582bb6a16c6","Type":"ContainerDied","Data":"c62eb9b618df87c6c92108546008b7a59fd2502a677a5d40c3028f6f2c2c3981"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.384859 4769 scope.go:117] "RemoveContainer" containerID="9463b2a173aeeb184ff71e3c26a548c6aeaeb9c664a90694bad8442a2fdbe0a2" Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.384998 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64fcb8f85-w8rj7" Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.398193 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" event={"ID":"f3770735-65bc-4dc8-9ff1-92988a3f8327","Type":"ContainerStarted","Data":"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.398234 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" event={"ID":"f3770735-65bc-4dc8-9ff1-92988a3f8327","Type":"ContainerStarted","Data":"bac684f9a875746815e93d74f764bc1cd603c66d18716544c6538463c0acab18"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.402771 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4gqp" event={"ID":"d21837ef-5a05-4a03-a065-0467ccfc5413","Type":"ContainerStarted","Data":"7017daab164c29ce888f734a562934c57220ca8a1a72437be4930b14c35fce2c"} Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.493859 4769 scope.go:117] "RemoveContainer" containerID="0c222336685823599bd76e1fc511871b92aee29581caa0c3d4846e88f83c1a46" Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.523599 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.538819 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64fcb8f85-w8rj7"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.561401 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.593526 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2ctln"] Oct 06 07:32:01 crc kubenswrapper[4769]: I1006 07:32:01.599182 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nl2f8"] Oct 06 07:32:01 crc 
kubenswrapper[4769]: I1006 07:32:01.612872 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:01 crc kubenswrapper[4769]: W1006 07:32:01.638898 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b7fe511_77bf_4a50_b42a_3dee332f2a69.slice/crio-6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e WatchSource:0}: Error finding container 6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e: Status 404 returned error can't find the container with id 6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.034943 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.038869 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mqvfk" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.113695 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlj5t\" (UniqueName: \"kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t\") pod \"586f85b0-c0bf-473c-9446-b9263c3ff95b\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.113782 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.113823 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle\") pod \"586f85b0-c0bf-473c-9446-b9263c3ff95b\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.113857 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfdsq\" (UniqueName: \"kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.113945 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data\") pod \"586f85b0-c0bf-473c-9446-b9263c3ff95b\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.114063 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.114093 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.114140 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 
07:32:02.114180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb\") pod \"f3770735-65bc-4dc8-9ff1-92988a3f8327\" (UID: \"f3770735-65bc-4dc8-9ff1-92988a3f8327\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.114231 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data\") pod \"586f85b0-c0bf-473c-9446-b9263c3ff95b\" (UID: \"586f85b0-c0bf-473c-9446-b9263c3ff95b\") " Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.122545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t" (OuterVolumeSpecName: "kube-api-access-rlj5t") pod "586f85b0-c0bf-473c-9446-b9263c3ff95b" (UID: "586f85b0-c0bf-473c-9446-b9263c3ff95b"). InnerVolumeSpecName "kube-api-access-rlj5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.126580 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq" (OuterVolumeSpecName: "kube-api-access-lfdsq") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "kube-api-access-lfdsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.126761 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "586f85b0-c0bf-473c-9446-b9263c3ff95b" (UID: "586f85b0-c0bf-473c-9446-b9263c3ff95b"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.144101 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.156016 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config" (OuterVolumeSpecName: "config") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.175527 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.193067 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "586f85b0-c0bf-473c-9446-b9263c3ff95b" (UID: "586f85b0-c0bf-473c-9446-b9263c3ff95b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.209052 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.210866 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" path="/var/lib/kubelet/pods/49491217-3434-4f19-8d20-7582bb6a16c6/volumes" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.213996 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3770735-65bc-4dc8-9ff1-92988a3f8327" (UID: "f3770735-65bc-4dc8-9ff1-92988a3f8327"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216674 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlj5t\" (UniqueName: \"kubernetes.io/projected/586f85b0-c0bf-473c-9446-b9263c3ff95b-kube-api-access-rlj5t\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216699 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216709 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216719 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfdsq\" (UniqueName: \"kubernetes.io/projected/f3770735-65bc-4dc8-9ff1-92988a3f8327-kube-api-access-lfdsq\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216728 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216738 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216747 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 
07:32:02.216755 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f3770735-65bc-4dc8-9ff1-92988a3f8327-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.216763 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.252531 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data" (OuterVolumeSpecName: "config-data") pod "586f85b0-c0bf-473c-9446-b9263c3ff95b" (UID: "586f85b0-c0bf-473c-9446-b9263c3ff95b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.319438 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/586f85b0-c0bf-473c-9446-b9263c3ff95b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.424789 4769 generic.go:334] "Generic (PLEG): container finished" podID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerID="d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3" exitCode=0 Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.424880 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" event={"ID":"a89f86a5-2e87-4e9d-9650-67cad17f80ef","Type":"ContainerDied","Data":"d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.424909 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" 
event={"ID":"a89f86a5-2e87-4e9d-9650-67cad17f80ef","Type":"ContainerStarted","Data":"3e62a69a15afc1df32c1d954ed57bf4b636549155f4dabe76780f069c5863152"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.426797 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ctln" event={"ID":"1b7fe511-77bf-4a50-b42a-3dee332f2a69","Type":"ContainerStarted","Data":"6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.428705 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerStarted","Data":"0db5db2255520557b58747ac605160f1a01dd863ad38393603b60c50ac29d405"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.447990 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thnwc" event={"ID":"7d89b00e-b803-4c22-b820-653e98f239b0","Type":"ContainerStarted","Data":"71eaac04018cfb7703bfb9c2a0ba922686e6516ce46b112a11e28376db44f53f"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.451560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mqvfk" event={"ID":"586f85b0-c0bf-473c-9446-b9263c3ff95b","Type":"ContainerDied","Data":"0f63cb1d59bad768cb0863d61ec21169013d84677c19ed438c71ed50025d422d"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.451619 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f63cb1d59bad768cb0863d61ec21169013d84677c19ed438c71ed50025d422d" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.451574 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mqvfk" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.472955 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nl2f8" event={"ID":"55fda312-faf1-4c7d-9b03-0aca23b7d5cb","Type":"ContainerStarted","Data":"ba0817f4f936e4202dfb3a212bf8aa3f2e475fbdfce62a28116e07342e8b8394"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.488438 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-thnwc" podStartSLOduration=3.48839137 podStartE2EDuration="3.48839137s" podCreationTimestamp="2025-10-06 07:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:02.480389991 +0000 UTC m=+919.004671138" watchObservedRunningTime="2025-10-06 07:32:02.48839137 +0000 UTC m=+919.012672537" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.489983 4769 generic.go:334] "Generic (PLEG): container finished" podID="f3770735-65bc-4dc8-9ff1-92988a3f8327" containerID="53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7" exitCode=0 Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.490040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" event={"ID":"f3770735-65bc-4dc8-9ff1-92988a3f8327","Type":"ContainerDied","Data":"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.490068 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" event={"ID":"f3770735-65bc-4dc8-9ff1-92988a3f8327","Type":"ContainerDied","Data":"bac684f9a875746815e93d74f764bc1cd603c66d18716544c6538463c0acab18"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.490085 4769 scope.go:117] "RemoveContainer" containerID="53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7" Oct 06 
07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.490167 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75d46dcdf-x65rz" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.499976 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4gqp" event={"ID":"d21837ef-5a05-4a03-a065-0467ccfc5413","Type":"ContainerStarted","Data":"c598899dab4197ff1d8a4013dade77ab3f21d1890b6b25fb7e44c9f2858f4460"} Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.559515 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l4gqp" podStartSLOduration=3.559499223 podStartE2EDuration="3.559499223s" podCreationTimestamp="2025-10-06 07:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:02.530666325 +0000 UTC m=+919.054947492" watchObservedRunningTime="2025-10-06 07:32:02.559499223 +0000 UTC m=+919.083780370" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.584215 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.602880 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75d46dcdf-x65rz"] Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.638165 4769 scope.go:117] "RemoveContainer" containerID="53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7" Oct 06 07:32:02 crc kubenswrapper[4769]: E1006 07:32:02.642665 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7\": container with ID starting with 53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7 not found: ID does not exist" 
containerID="53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.642712 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7"} err="failed to get container status \"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7\": rpc error: code = NotFound desc = could not find container \"53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7\": container with ID starting with 53aae9301305a51fb3c27a1d752eef66e8df5c777dcc03550b33767c99f174f7 not found: ID does not exist" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.729635 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.842480 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:02 crc kubenswrapper[4769]: E1006 07:32:02.842873 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3770735-65bc-4dc8-9ff1-92988a3f8327" containerName="init" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.842889 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3770735-65bc-4dc8-9ff1-92988a3f8327" containerName="init" Oct 06 07:32:02 crc kubenswrapper[4769]: E1006 07:32:02.842900 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="586f85b0-c0bf-473c-9446-b9263c3ff95b" containerName="glance-db-sync" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.842906 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="586f85b0-c0bf-473c-9446-b9263c3ff95b" containerName="glance-db-sync" Oct 06 07:32:02 crc kubenswrapper[4769]: E1006 07:32:02.842921 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="init" Oct 06 07:32:02 crc 
kubenswrapper[4769]: I1006 07:32:02.842927 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="init" Oct 06 07:32:02 crc kubenswrapper[4769]: E1006 07:32:02.842961 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="dnsmasq-dns" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.842967 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="dnsmasq-dns" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.843122 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="586f85b0-c0bf-473c-9446-b9263c3ff95b" containerName="glance-db-sync" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.843134 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3770735-65bc-4dc8-9ff1-92988a3f8327" containerName="init" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.843142 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="49491217-3434-4f19-8d20-7582bb6a16c6" containerName="dnsmasq-dns" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.844018 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.864884 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.952958 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmgg7\" (UniqueName: \"kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.953049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.953068 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.953089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.953104 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:02 crc kubenswrapper[4769]: I1006 07:32:02.953145 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054117 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmgg7\" (UniqueName: \"kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054227 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054244 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054260 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.054296 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.055116 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.055120 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.055677 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.059022 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.059268 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.079316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmgg7\" (UniqueName: \"kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7\") pod \"dnsmasq-dns-7d7577745f-sphwf\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.196867 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.388231 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.409749 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.431009 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.442247 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.442336 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.442251 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jlxmx" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.466495 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.475957 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.478448 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.482513 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.493086 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537652 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537711 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537750 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpw5m\" (UniqueName: \"kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537829 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537853 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.537891 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543548 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543719 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543754 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543829 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543885 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.543908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6n4w\" (UniqueName: \"kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w\") pod \"glance-default-external-api-0\" (UID: 
\"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.545540 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" event={"ID":"a89f86a5-2e87-4e9d-9650-67cad17f80ef","Type":"ContainerStarted","Data":"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab"} Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.549770 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="dnsmasq-dns" containerID="cri-o://8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab" gracePeriod=10 Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.550159 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.587253 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" podStartSLOduration=3.587234638 podStartE2EDuration="3.587234638s" podCreationTimestamp="2025-10-06 07:32:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:03.573172904 +0000 UTC m=+920.097454051" watchObservedRunningTime="2025-10-06 07:32:03.587234638 +0000 UTC m=+920.111515785" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645470 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645533 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645639 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645665 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6n4w\" (UniqueName: \"kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645717 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645789 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpw5m\" (UniqueName: \"kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645836 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645928 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645962 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.645997 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.646056 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.646619 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.647467 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.649153 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.651543 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.651788 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.660333 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.651574 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.669889 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " 
pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.670769 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.678018 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.680015 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.685887 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.686551 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpw5m\" (UniqueName: \"kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.691497 
4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6n4w\" (UniqueName: \"kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.731663 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.767847 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.768717 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.825525 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:03 crc kubenswrapper[4769]: I1006 07:32:03.948474 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.196766 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.202723 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3770735-65bc-4dc8-9ff1-92988a3f8327" path="/var/lib/kubelet/pods/f3770735-65bc-4dc8-9ff1-92988a3f8327/volumes" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258275 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0\") pod \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258316 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc\") pod \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258442 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config\") pod \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258477 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb\") pod \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258589 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct\") pod 
\"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.258624 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb\") pod \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\" (UID: \"a89f86a5-2e87-4e9d-9650-67cad17f80ef\") " Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.289254 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct" (OuterVolumeSpecName: "kube-api-access-qv8ct") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "kube-api-access-qv8ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.345170 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.365752 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv8ct\" (UniqueName: \"kubernetes.io/projected/a89f86a5-2e87-4e9d-9650-67cad17f80ef-kube-api-access-qv8ct\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.365782 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.377264 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.463049 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.463802 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.468098 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config" (OuterVolumeSpecName: "config") pod "a89f86a5-2e87-4e9d-9650-67cad17f80ef" (UID: "a89f86a5-2e87-4e9d-9650-67cad17f80ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.470350 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.470384 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.470397 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.470406 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a89f86a5-2e87-4e9d-9650-67cad17f80ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.582264 4769 generic.go:334] "Generic (PLEG): container finished" podID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerID="8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab" exitCode=0 Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.582331 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" 
event={"ID":"a89f86a5-2e87-4e9d-9650-67cad17f80ef","Type":"ContainerDied","Data":"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab"} Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.582355 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" event={"ID":"a89f86a5-2e87-4e9d-9650-67cad17f80ef","Type":"ContainerDied","Data":"3e62a69a15afc1df32c1d954ed57bf4b636549155f4dabe76780f069c5863152"} Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.582373 4769 scope.go:117] "RemoveContainer" containerID="8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.582541 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486574f69-wqqdq" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.583987 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.585243 4769 generic.go:334] "Generic (PLEG): container finished" podID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerID="826e9885432bee17e529c9ee89002c393bf1ed59cb34d35521f463e10d5ef2ab" exitCode=0 Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.585277 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" event={"ID":"4d13b502-7f7e-447b-a4bc-008681b34ee0","Type":"ContainerDied","Data":"826e9885432bee17e529c9ee89002c393bf1ed59cb34d35521f463e10d5ef2ab"} Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.585302 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" event={"ID":"4d13b502-7f7e-447b-a4bc-008681b34ee0","Type":"ContainerStarted","Data":"af85802338dc8d04870bbed285bba3498831e879cfb8557f07e88e89fcfd9f30"} Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.636598 4769 scope.go:117] "RemoveContainer" 
containerID="d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.679691 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.686275 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486574f69-wqqdq"] Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.752670 4769 scope.go:117] "RemoveContainer" containerID="8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab" Oct 06 07:32:04 crc kubenswrapper[4769]: E1006 07:32:04.752985 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab\": container with ID starting with 8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab not found: ID does not exist" containerID="8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.753029 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab"} err="failed to get container status \"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab\": rpc error: code = NotFound desc = could not find container \"8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab\": container with ID starting with 8d4c0478c74f8f81e07ae6bb19d3ff7dd70391ccd5abc6cee760655387290dab not found: ID does not exist" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.753058 4769 scope.go:117] "RemoveContainer" containerID="d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3" Oct 06 07:32:04 crc kubenswrapper[4769]: E1006 07:32:04.753293 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3\": container with ID starting with d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3 not found: ID does not exist" containerID="d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3" Oct 06 07:32:04 crc kubenswrapper[4769]: I1006 07:32:04.753313 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3"} err="failed to get container status \"d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3\": rpc error: code = NotFound desc = could not find container \"d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3\": container with ID starting with d1e804badda368ddb067c5925ab50761e41940bdfc46d58bc94c757d1614dfa3 not found: ID does not exist" Oct 06 07:32:05 crc kubenswrapper[4769]: I1006 07:32:05.491191 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:05 crc kubenswrapper[4769]: I1006 07:32:05.625888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" event={"ID":"4d13b502-7f7e-447b-a4bc-008681b34ee0","Type":"ContainerStarted","Data":"4c81c0b1f8f70b17392b85053686fc53a02b844b623856fd48bd917147373d34"} Oct 06 07:32:05 crc kubenswrapper[4769]: I1006 07:32:05.626170 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:05 crc kubenswrapper[4769]: I1006 07:32:05.641821 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerStarted","Data":"029b4fdb5cffac756548058fd667cadae805bd6179f5ce8da3fff646fccc034f"} Oct 06 07:32:05 crc kubenswrapper[4769]: I1006 07:32:05.655133 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7d7577745f-sphwf" podStartSLOduration=3.655112688 podStartE2EDuration="3.655112688s" podCreationTimestamp="2025-10-06 07:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:05.648834076 +0000 UTC m=+922.173115223" watchObservedRunningTime="2025-10-06 07:32:05.655112688 +0000 UTC m=+922.179393835" Oct 06 07:32:06 crc kubenswrapper[4769]: I1006 07:32:06.179070 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" path="/var/lib/kubelet/pods/a89f86a5-2e87-4e9d-9650-67cad17f80ef/volumes" Oct 06 07:32:06 crc kubenswrapper[4769]: I1006 07:32:06.655299 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerStarted","Data":"17c0f4ca2d17a3a7d2eb8079fa66398b280f5b54f45045da1a861c223a917018"} Oct 06 07:32:07 crc kubenswrapper[4769]: I1006 07:32:07.689161 4769 generic.go:334] "Generic (PLEG): container finished" podID="d21837ef-5a05-4a03-a065-0467ccfc5413" containerID="c598899dab4197ff1d8a4013dade77ab3f21d1890b6b25fb7e44c9f2858f4460" exitCode=0 Oct 06 07:32:07 crc kubenswrapper[4769]: I1006 07:32:07.689558 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4gqp" event={"ID":"d21837ef-5a05-4a03-a065-0467ccfc5413","Type":"ContainerDied","Data":"c598899dab4197ff1d8a4013dade77ab3f21d1890b6b25fb7e44c9f2858f4460"} Oct 06 07:32:07 crc kubenswrapper[4769]: I1006 07:32:07.690993 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerStarted","Data":"3488fa1678210147ef0eb186f5a868fd949b5f407148939f06310b8f4a126e8b"} Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.115768 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220176 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220326 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcbmg\" (UniqueName: \"kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220416 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220486 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.220543 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data\") pod \"d21837ef-5a05-4a03-a065-0467ccfc5413\" (UID: \"d21837ef-5a05-4a03-a065-0467ccfc5413\") " Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.235510 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.244962 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg" (OuterVolumeSpecName: "kube-api-access-dcbmg") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "kube-api-access-dcbmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.246647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.284211 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data" (OuterVolumeSpecName: "config-data") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.285570 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.287578 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts" (OuterVolumeSpecName: "scripts") pod "d21837ef-5a05-4a03-a065-0467ccfc5413" (UID: "d21837ef-5a05-4a03-a065-0467ccfc5413"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325380 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325436 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcbmg\" (UniqueName: \"kubernetes.io/projected/d21837ef-5a05-4a03-a065-0467ccfc5413-kube-api-access-dcbmg\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325448 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325458 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 
07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325466 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.325474 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d21837ef-5a05-4a03-a065-0467ccfc5413-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.716074 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.730040 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4gqp" event={"ID":"d21837ef-5a05-4a03-a065-0467ccfc5413","Type":"ContainerDied","Data":"7017daab164c29ce888f734a562934c57220ca8a1a72437be4930b14c35fce2c"} Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.730078 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7017daab164c29ce888f734a562934c57220ca8a1a72437be4930b14c35fce2c" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.730141 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l4gqp" Oct 06 07:32:10 crc kubenswrapper[4769]: I1006 07:32:10.779265 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.232339 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l4gqp"] Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.243197 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l4gqp"] Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.292693 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-ll86w"] Oct 06 07:32:11 crc kubenswrapper[4769]: E1006 07:32:11.293151 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="dnsmasq-dns" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.293174 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="dnsmasq-dns" Oct 06 07:32:11 crc kubenswrapper[4769]: E1006 07:32:11.293216 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21837ef-5a05-4a03-a065-0467ccfc5413" containerName="keystone-bootstrap" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.293224 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21837ef-5a05-4a03-a065-0467ccfc5413" containerName="keystone-bootstrap" Oct 06 07:32:11 crc kubenswrapper[4769]: E1006 07:32:11.293237 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="init" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.293245 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="init" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.293476 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a89f86a5-2e87-4e9d-9650-67cad17f80ef" containerName="dnsmasq-dns" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.293499 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21837ef-5a05-4a03-a065-0467ccfc5413" containerName="keystone-bootstrap" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.294264 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.297804 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.297886 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.297914 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wgm6x" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.299094 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.310885 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ll86w"] Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343503 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343555 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys\") pod \"keystone-bootstrap-ll86w\" (UID: 
\"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343601 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343621 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343667 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.343717 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9rl6\" (UniqueName: \"kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.448948 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9rl6\" (UniqueName: \"kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6\") pod \"keystone-bootstrap-ll86w\" (UID: 
\"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.449110 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.449139 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.449240 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.449280 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.449346 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: 
I1006 07:32:11.455129 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.456388 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.457119 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.458162 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.459195 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.465935 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9rl6\" (UniqueName: 
\"kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6\") pod \"keystone-bootstrap-ll86w\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:11 crc kubenswrapper[4769]: I1006 07:32:11.613771 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:12 crc kubenswrapper[4769]: I1006 07:32:12.174620 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21837ef-5a05-4a03-a065-0467ccfc5413" path="/var/lib/kubelet/pods/d21837ef-5a05-4a03-a065-0467ccfc5413/volumes" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.198666 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.247691 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-ll86w"] Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.255473 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.255715 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="dnsmasq-dns" containerID="cri-o://89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500" gracePeriod=10 Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.675844 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.768982 4769 generic.go:334] "Generic (PLEG): container finished" podID="7f947e31-bd5d-4571-a79f-85805377370c" containerID="89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500" exitCode=0 Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.769067 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" event={"ID":"7f947e31-bd5d-4571-a79f-85805377370c","Type":"ContainerDied","Data":"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.769092 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" event={"ID":"7f947e31-bd5d-4571-a79f-85805377370c","Type":"ContainerDied","Data":"f7d11daa23d21e80789e9006a0a5892d9f16f3c7a0f9f0264fb081b219586cab"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.769109 4769 scope.go:117] "RemoveContainer" containerID="89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.769271 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c95b97d95-bbkf5" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.783684 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerStarted","Data":"b1c69624f8a28e7bcbbc068ea3d942e3a4e9dd2240ed49463facc7ad480bb2c1"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.786088 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ll86w" event={"ID":"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a","Type":"ContainerStarted","Data":"73fbbe7109399078438588082acac8981570a0e79d33bdf28bfce22f9f79aefd"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.786127 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ll86w" event={"ID":"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a","Type":"ContainerStarted","Data":"e66a8d4b5a668bd28faf23a9ac2dc1651efedf1cf7ce3cab0ff675eeb7787bce"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.791053 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ctln" event={"ID":"1b7fe511-77bf-4a50-b42a-3dee332f2a69","Type":"ContainerStarted","Data":"17de8ff848c3abf1fecac3fe5324e71097eb01eec0bc31a688c2da873fc16fb3"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.798009 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb\") pod \"7f947e31-bd5d-4571-a79f-85805377370c\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.798051 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgt24\" (UniqueName: \"kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24\") pod \"7f947e31-bd5d-4571-a79f-85805377370c\" (UID: 
\"7f947e31-bd5d-4571-a79f-85805377370c\") " Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.798147 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc\") pod \"7f947e31-bd5d-4571-a79f-85805377370c\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.798210 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config\") pod \"7f947e31-bd5d-4571-a79f-85805377370c\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.798233 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb\") pod \"7f947e31-bd5d-4571-a79f-85805377370c\" (UID: \"7f947e31-bd5d-4571-a79f-85805377370c\") " Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.811519 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerStarted","Data":"3db2ec824a7658983cb2407531c3fb79ec06373971f4a53b76fb5c3574c717ec"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.811700 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-log" containerID="cri-o://17c0f4ca2d17a3a7d2eb8079fa66398b280f5b54f45045da1a861c223a917018" gracePeriod=30 Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.811824 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="24129778-5491-4376-949d-5a4f7344b60a" 
containerName="glance-httpd" containerID="cri-o://3db2ec824a7658983cb2407531c3fb79ec06373971f4a53b76fb5c3574c717ec" gracePeriod=30 Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.820750 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24" (OuterVolumeSpecName: "kube-api-access-kgt24") pod "7f947e31-bd5d-4571-a79f-85805377370c" (UID: "7f947e31-bd5d-4571-a79f-85805377370c"). InnerVolumeSpecName "kube-api-access-kgt24". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.825686 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nl2f8" event={"ID":"55fda312-faf1-4c7d-9b03-0aca23b7d5cb","Type":"ContainerStarted","Data":"7531f4e7ec0d07dd5d70e987055e7039694b6866c08666adbce1ad5ade79dc7d"} Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.829571 4769 scope.go:117] "RemoveContainer" containerID="6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.830157 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2ctln" podStartSLOduration=2.724845461 podStartE2EDuration="13.830137338s" podCreationTimestamp="2025-10-06 07:32:00 +0000 UTC" firstStartedPulling="2025-10-06 07:32:01.652491567 +0000 UTC m=+918.176772714" lastFinishedPulling="2025-10-06 07:32:12.757783424 +0000 UTC m=+929.282064591" observedRunningTime="2025-10-06 07:32:13.81737149 +0000 UTC m=+930.341652637" watchObservedRunningTime="2025-10-06 07:32:13.830137338 +0000 UTC m=+930.354418485" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.832523 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerStarted","Data":"676ce3c05b5a0c3694f26e6491299a5f584a906c4661e6cb54453732db741a8c"} 
Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.853534 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.852783947 podStartE2EDuration="10.852783947s" podCreationTimestamp="2025-10-06 07:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:13.848757458 +0000 UTC m=+930.373038595" watchObservedRunningTime="2025-10-06 07:32:13.852783947 +0000 UTC m=+930.377065094" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.875370 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f947e31-bd5d-4571-a79f-85805377370c" (UID: "7f947e31-bd5d-4571-a79f-85805377370c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.876755 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nl2f8" podStartSLOduration=2.647188029 podStartE2EDuration="13.876742532s" podCreationTimestamp="2025-10-06 07:32:00 +0000 UTC" firstStartedPulling="2025-10-06 07:32:01.612051952 +0000 UTC m=+918.136333099" lastFinishedPulling="2025-10-06 07:32:12.841606455 +0000 UTC m=+929.365887602" observedRunningTime="2025-10-06 07:32:13.875697643 +0000 UTC m=+930.399978790" watchObservedRunningTime="2025-10-06 07:32:13.876742532 +0000 UTC m=+930.401023679" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.898561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config" (OuterVolumeSpecName: "config") pod "7f947e31-bd5d-4571-a79f-85805377370c" (UID: "7f947e31-bd5d-4571-a79f-85805377370c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.905800 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.905829 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.905842 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgt24\" (UniqueName: \"kubernetes.io/projected/7f947e31-bd5d-4571-a79f-85805377370c-kube-api-access-kgt24\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.909071 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f947e31-bd5d-4571-a79f-85805377370c" (UID: "7f947e31-bd5d-4571-a79f-85805377370c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:13 crc kubenswrapper[4769]: I1006 07:32:13.920251 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f947e31-bd5d-4571-a79f-85805377370c" (UID: "7f947e31-bd5d-4571-a79f-85805377370c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:13.999902 4769 scope.go:117] "RemoveContainer" containerID="89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500" Oct 06 07:32:14 crc kubenswrapper[4769]: E1006 07:32:14.000497 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500\": container with ID starting with 89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500 not found: ID does not exist" containerID="89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.000535 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500"} err="failed to get container status \"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500\": rpc error: code = NotFound desc = could not find container \"89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500\": container with ID starting with 89732a7e585eb5d79fe2f2c417961e0cd5d3c28d7c1ebf55ddef8b0aada16500 not found: ID does not exist" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.000568 4769 scope.go:117] "RemoveContainer" containerID="6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8" Oct 06 07:32:14 crc kubenswrapper[4769]: E1006 07:32:14.000809 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8\": container with ID starting with 6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8 not found: ID does not exist" containerID="6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.000842 
4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8"} err="failed to get container status \"6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8\": rpc error: code = NotFound desc = could not find container \"6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8\": container with ID starting with 6ec917a318228be89b91f7dc5dd5e5320f5d078072b875382572a0baed4c61e8 not found: ID does not exist" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.007881 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.007924 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f947e31-bd5d-4571-a79f-85805377370c-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.123911 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.130438 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c95b97d95-bbkf5"] Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.177236 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f947e31-bd5d-4571-a79f-85805377370c" path="/var/lib/kubelet/pods/7f947e31-bd5d-4571-a79f-85805377370c/volumes" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.850373 4769 generic.go:334] "Generic (PLEG): container finished" podID="24129778-5491-4376-949d-5a4f7344b60a" containerID="3db2ec824a7658983cb2407531c3fb79ec06373971f4a53b76fb5c3574c717ec" exitCode=143 Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.850923 4769 generic.go:334] "Generic (PLEG): container 
finished" podID="24129778-5491-4376-949d-5a4f7344b60a" containerID="17c0f4ca2d17a3a7d2eb8079fa66398b280f5b54f45045da1a861c223a917018" exitCode=143 Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.850917 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerDied","Data":"3db2ec824a7658983cb2407531c3fb79ec06373971f4a53b76fb5c3574c717ec"} Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.850983 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerDied","Data":"17c0f4ca2d17a3a7d2eb8079fa66398b280f5b54f45045da1a861c223a917018"} Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.851000 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"24129778-5491-4376-949d-5a4f7344b60a","Type":"ContainerDied","Data":"029b4fdb5cffac756548058fd667cadae805bd6179f5ce8da3fff646fccc034f"} Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.851014 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029b4fdb5cffac756548058fd667cadae805bd6179f5ce8da3fff646fccc034f" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.857762 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerStarted","Data":"47bcd566d240d501ca78f81c29c18b2c7d752a10a9d750c03f357a9f6ba11b20"} Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.866758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerStarted","Data":"635e31c0a7303277dad2663ebcc13d02f7d7ccf35b3d231fb2f29c3f04f8fe30"} Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.867049 4769 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-log" containerID="cri-o://b1c69624f8a28e7bcbbc068ea3d942e3a4e9dd2240ed49463facc7ad480bb2c1" gracePeriod=30 Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.867270 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-httpd" containerID="cri-o://635e31c0a7303277dad2663ebcc13d02f7d7ccf35b3d231fb2f29c3f04f8fe30" gracePeriod=30 Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.890278 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-ll86w" podStartSLOduration=3.890260879 podStartE2EDuration="3.890260879s" podCreationTimestamp="2025-10-06 07:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:14.884971715 +0000 UTC m=+931.409252872" watchObservedRunningTime="2025-10-06 07:32:14.890260879 +0000 UTC m=+931.414542026" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.890833 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.912509 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.912485186 podStartE2EDuration="11.912485186s" podCreationTimestamp="2025-10-06 07:32:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:14.908352343 +0000 UTC m=+931.432633490" watchObservedRunningTime="2025-10-06 07:32:14.912485186 +0000 UTC m=+931.436766333" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.923979 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpw5m\" (UniqueName: \"kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924072 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924125 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924192 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" 
(UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924232 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924255 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924285 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"24129778-5491-4376-949d-5a4f7344b60a\" (UID: \"24129778-5491-4376-949d-5a4f7344b60a\") " Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.924915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.926281 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs" (OuterVolumeSpecName: "logs") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.933726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.935600 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m" (OuterVolumeSpecName: "kube-api-access-dpw5m") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "kube-api-access-dpw5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.962859 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts" (OuterVolumeSpecName: "scripts") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.966879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:14 crc kubenswrapper[4769]: I1006 07:32:14.995884 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data" (OuterVolumeSpecName: "config-data") pod "24129778-5491-4376-949d-5a4f7344b60a" (UID: "24129778-5491-4376-949d-5a4f7344b60a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026143 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026178 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026188 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24129778-5491-4376-949d-5a4f7344b60a-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026221 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026232 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpw5m\" (UniqueName: \"kubernetes.io/projected/24129778-5491-4376-949d-5a4f7344b60a-kube-api-access-dpw5m\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026242 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-scripts\") 
on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.026250 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24129778-5491-4376-949d-5a4f7344b60a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.047642 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.128193 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.877919 4769 generic.go:334] "Generic (PLEG): container finished" podID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerID="635e31c0a7303277dad2663ebcc13d02f7d7ccf35b3d231fb2f29c3f04f8fe30" exitCode=0 Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.878274 4769 generic.go:334] "Generic (PLEG): container finished" podID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerID="b1c69624f8a28e7bcbbc068ea3d942e3a4e9dd2240ed49463facc7ad480bb2c1" exitCode=143 Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.878345 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.877986 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerDied","Data":"635e31c0a7303277dad2663ebcc13d02f7d7ccf35b3d231fb2f29c3f04f8fe30"} Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.878381 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerDied","Data":"b1c69624f8a28e7bcbbc068ea3d942e3a4e9dd2240ed49463facc7ad480bb2c1"} Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.920225 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.928545 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958100 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:15 crc kubenswrapper[4769]: E1006 07:32:15.958479 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-log" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958498 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-log" Oct 06 07:32:15 crc kubenswrapper[4769]: E1006 07:32:15.958517 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="init" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958523 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="init" Oct 06 07:32:15 crc kubenswrapper[4769]: E1006 
07:32:15.958542 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="dnsmasq-dns" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958548 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="dnsmasq-dns" Oct 06 07:32:15 crc kubenswrapper[4769]: E1006 07:32:15.958565 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-httpd" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958572 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-httpd" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958743 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-httpd" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958764 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="24129778-5491-4376-949d-5a4f7344b60a" containerName="glance-log" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.958779 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f947e31-bd5d-4571-a79f-85805377370c" containerName="dnsmasq-dns" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.959672 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.965497 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.969163 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 07:32:15 crc kubenswrapper[4769]: I1006 07:32:15.969859 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.045957 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046042 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046067 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046088 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046241 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2s86\" (UniqueName: \"kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046447 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.046497 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147719 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147809 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147840 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147937 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.147966 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.148007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.148038 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2s86\" (UniqueName: \"kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.148785 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.149038 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.149257 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.153936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.153962 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.156512 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.156520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.190594 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24129778-5491-4376-949d-5a4f7344b60a" path="/var/lib/kubelet/pods/24129778-5491-4376-949d-5a4f7344b60a/volumes" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.193501 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d2s86\" (UniqueName: \"kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.212355 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:32:16 crc kubenswrapper[4769]: I1006 07:32:16.303553 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.895676 4769 generic.go:334] "Generic (PLEG): container finished" podID="1b7fe511-77bf-4a50-b42a-3dee332f2a69" containerID="17de8ff848c3abf1fecac3fe5324e71097eb01eec0bc31a688c2da873fc16fb3" exitCode=0 Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.895758 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ctln" event={"ID":"1b7fe511-77bf-4a50-b42a-3dee332f2a69","Type":"ContainerDied","Data":"17de8ff848c3abf1fecac3fe5324e71097eb01eec0bc31a688c2da873fc16fb3"} Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.899370 4769 generic.go:334] "Generic (PLEG): container finished" podID="55fda312-faf1-4c7d-9b03-0aca23b7d5cb" containerID="7531f4e7ec0d07dd5d70e987055e7039694b6866c08666adbce1ad5ade79dc7d" exitCode=0 Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.899454 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nl2f8" event={"ID":"55fda312-faf1-4c7d-9b03-0aca23b7d5cb","Type":"ContainerDied","Data":"7531f4e7ec0d07dd5d70e987055e7039694b6866c08666adbce1ad5ade79dc7d"} Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.901978 4769 
generic.go:334] "Generic (PLEG): container finished" podID="0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" containerID="73fbbe7109399078438588082acac8981570a0e79d33bdf28bfce22f9f79aefd" exitCode=0 Oct 06 07:32:17 crc kubenswrapper[4769]: I1006 07:32:17.902013 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ll86w" event={"ID":"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a","Type":"ContainerDied","Data":"73fbbe7109399078438588082acac8981570a0e79d33bdf28bfce22f9f79aefd"} Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.246089 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.246717 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.246762 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.247276 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.247343 4769 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede" gracePeriod=600 Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.967246 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede" exitCode=0 Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.967292 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede"} Oct 06 07:32:22 crc kubenswrapper[4769]: I1006 07:32:22.967325 4769 scope.go:117] "RemoveContainer" containerID="d8e933a4c91e9e393dfdc1657ae1bf35eec6b067aa89c28817b282eafc59971e" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.174002 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.280941 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle\") pod \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.282176 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data\") pod \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.282766 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmv5l\" (UniqueName: \"kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l\") pod \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\" (UID: \"55fda312-faf1-4c7d-9b03-0aca23b7d5cb\") " Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.287608 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "55fda312-faf1-4c7d-9b03-0aca23b7d5cb" (UID: "55fda312-faf1-4c7d-9b03-0aca23b7d5cb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.288638 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l" (OuterVolumeSpecName: "kube-api-access-kmv5l") pod "55fda312-faf1-4c7d-9b03-0aca23b7d5cb" (UID: "55fda312-faf1-4c7d-9b03-0aca23b7d5cb"). 
InnerVolumeSpecName "kube-api-access-kmv5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.314589 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55fda312-faf1-4c7d-9b03-0aca23b7d5cb" (UID: "55fda312-faf1-4c7d-9b03-0aca23b7d5cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.385000 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmv5l\" (UniqueName: \"kubernetes.io/projected/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-kube-api-access-kmv5l\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.385057 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.385068 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/55fda312-faf1-4c7d-9b03-0aca23b7d5cb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.996554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nl2f8" event={"ID":"55fda312-faf1-4c7d-9b03-0aca23b7d5cb","Type":"ContainerDied","Data":"ba0817f4f936e4202dfb3a212bf8aa3f2e475fbdfce62a28116e07342e8b8394"} Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.996899 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba0817f4f936e4202dfb3a212bf8aa3f2e475fbdfce62a28116e07342e8b8394" Oct 06 07:32:24 crc kubenswrapper[4769]: I1006 07:32:24.996980 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nl2f8" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.521601 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5c48f574cf-j5sgk"] Oct 06 07:32:25 crc kubenswrapper[4769]: E1006 07:32:25.523538 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fda312-faf1-4c7d-9b03-0aca23b7d5cb" containerName="barbican-db-sync" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.523680 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fda312-faf1-4c7d-9b03-0aca23b7d5cb" containerName="barbican-db-sync" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.524010 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="55fda312-faf1-4c7d-9b03-0aca23b7d5cb" containerName="barbican-db-sync" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.533813 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.544942 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.545160 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l9rjm" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.547561 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.592492 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c48f574cf-j5sgk"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.621787 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-77b6b965c4-r6bn4"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.624446 4769 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.627595 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.687466 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77b6b965c4-r6bn4"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc1704d-575e-4d1b-a09b-faac26f1faf2-logs\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735376 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-combined-ca-bundle\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrb8\" (UniqueName: 
\"kubernetes.io/projected/cbc1704d-575e-4d1b-a09b-faac26f1faf2-kube-api-access-hgrb8\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735432 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data-custom\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-combined-ca-bundle\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vdn\" (UniqueName: \"kubernetes.io/projected/8a39b78c-9254-4275-b3bd-3fc0f137272f-kube-api-access-28vdn\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735515 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a39b78c-9254-4275-b3bd-3fc0f137272f-logs\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc 
kubenswrapper[4769]: I1006 07:32:25.735541 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.735567 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data-custom\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.763581 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.765050 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.782077 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:25 crc kubenswrapper[4769]: E1006 07:32:25.792697 4769 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:d018fe05595a319a521aca6a2235ba72" Oct 06 07:32:25 crc kubenswrapper[4769]: E1006 07:32:25.792762 4769 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:d018fe05595a319a521aca6a2235ba72" Oct 06 07:32:25 crc kubenswrapper[4769]: E1006 07:32:25.792910 4769 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:d018fe05595a319a521aca6a2235ba72,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2vhg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qtgml_openstack(1dc18379-7117-430b-9d0f-65115eaedf51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 06 07:32:25 crc kubenswrapper[4769]: E1006 07:32:25.794078 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qtgml" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc1704d-575e-4d1b-a09b-faac26f1faf2-logs\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837238 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4779v\" (UniqueName: \"kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837258 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837280 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837303 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-combined-ca-bundle\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837320 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgrb8\" (UniqueName: \"kubernetes.io/projected/cbc1704d-575e-4d1b-a09b-faac26f1faf2-kube-api-access-hgrb8\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837359 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data-custom\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " 
pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837390 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-combined-ca-bundle\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837471 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28vdn\" (UniqueName: \"kubernetes.io/projected/8a39b78c-9254-4275-b3bd-3fc0f137272f-kube-api-access-28vdn\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837488 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a39b78c-9254-4275-b3bd-3fc0f137272f-logs\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" 
(UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837541 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837565 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data-custom\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.837965 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbc1704d-575e-4d1b-a09b-faac26f1faf2-logs\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.839368 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a39b78c-9254-4275-b3bd-3fc0f137272f-logs\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: 
\"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.845044 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-combined-ca-bundle\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.846717 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data-custom\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.846723 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-combined-ca-bundle\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.846835 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data-custom\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.855996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a39b78c-9254-4275-b3bd-3fc0f137272f-config-data\") pod 
\"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.856740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbc1704d-575e-4d1b-a09b-faac26f1faf2-config-data\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.868124 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.869430 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.879804 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.909020 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.921268 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgrb8\" (UniqueName: \"kubernetes.io/projected/cbc1704d-575e-4d1b-a09b-faac26f1faf2-kube-api-access-hgrb8\") pod \"barbican-worker-5c48f574cf-j5sgk\" (UID: \"cbc1704d-575e-4d1b-a09b-faac26f1faf2\") " pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.921267 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28vdn\" (UniqueName: \"kubernetes.io/projected/8a39b78c-9254-4275-b3bd-3fc0f137272f-kube-api-access-28vdn\") pod \"barbican-keystone-listener-77b6b965c4-r6bn4\" (UID: \"8a39b78c-9254-4275-b3bd-3fc0f137272f\") " 
pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940659 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4779v\" (UniqueName: \"kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940728 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940753 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940774 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: 
\"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940792 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wvms\" (UniqueName: \"kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940811 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940847 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940867 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940901 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: 
\"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.940932 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.941936 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.942821 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.943320 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.953067 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc 
kubenswrapper[4769]: I1006 07:32:25.954528 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.954884 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" Oct 06 07:32:25 crc kubenswrapper[4769]: I1006 07:32:25.968539 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4779v\" (UniqueName: \"kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v\") pod \"dnsmasq-dns-66bc8796b9-ggdhr\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.039924 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4fe97d92-0aaa-4558-9cd2-892355c83d5e","Type":"ContainerDied","Data":"3488fa1678210147ef0eb186f5a868fd949b5f407148939f06310b8f4a126e8b"} Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.040405 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3488fa1678210147ef0eb186f5a868fd949b5f407148939f06310b8f4a126e8b" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.043595 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.043726 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.043772 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.043797 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wvms\" (UniqueName: \"kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.043816 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.044363 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.053418 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.053955 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.056007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-ll86w" event={"ID":"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a","Type":"ContainerDied","Data":"e66a8d4b5a668bd28faf23a9ac2dc1651efedf1cf7ce3cab0ff675eeb7787bce"} Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.056025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.056050 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e66a8d4b5a668bd28faf23a9ac2dc1651efedf1cf7ce3cab0ff675eeb7787bce" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.064332 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2ctln" event={"ID":"1b7fe511-77bf-4a50-b42a-3dee332f2a69","Type":"ContainerDied","Data":"6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e"} Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.064383 4769 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6def939f790feeff11d67a5152b2f053937e35e1f57b11db8fa66d960bf4705e" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.083265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wvms\" (UniqueName: \"kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms\") pod \"barbican-api-76647d8c4d-8r7dp\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.090590 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.097617 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.115243 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.122468 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:26 crc kubenswrapper[4769]: E1006 07:32:26.130262 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:d018fe05595a319a521aca6a2235ba72\\\"\"" pod="openstack/cinder-db-sync-qtgml" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145469 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9rl6\" (UniqueName: \"kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145545 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145623 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle\") pod \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145642 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145666 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145701 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145734 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data\") pod \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145777 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145819 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145873 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 
07:32:26.145899 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145965 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.145986 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle\") pod \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\" (UID: \"0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146025 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts\") pod \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146041 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl\") pod \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146060 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle\") pod 
\"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146089 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs\") pod \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\" (UID: \"1b7fe511-77bf-4a50-b42a-3dee332f2a69\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146110 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6n4w\" (UniqueName: \"kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w\") pod \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\" (UID: \"4fe97d92-0aaa-4558-9cd2-892355c83d5e\") " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.146743 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs" (OuterVolumeSpecName: "logs") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.147066 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.147773 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.147792 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fe97d92-0aaa-4558-9cd2-892355c83d5e-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.156485 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.159133 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs" (OuterVolumeSpecName: "logs") pod "1b7fe511-77bf-4a50-b42a-3dee332f2a69" (UID: "1b7fe511-77bf-4a50-b42a-3dee332f2a69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.161266 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts" (OuterVolumeSpecName: "scripts") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.163724 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.165224 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w" (OuterVolumeSpecName: "kube-api-access-g6n4w") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "kube-api-access-g6n4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.181266 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5c48f574cf-j5sgk" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.186378 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.187003 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts" (OuterVolumeSpecName: "scripts") pod "1b7fe511-77bf-4a50-b42a-3dee332f2a69" (UID: "1b7fe511-77bf-4a50-b42a-3dee332f2a69"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.187043 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6" (OuterVolumeSpecName: "kube-api-access-b9rl6") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "kube-api-access-b9rl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.187127 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts" (OuterVolumeSpecName: "scripts") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.188931 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.189293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl" (OuterVolumeSpecName: "kube-api-access-gjsnl") pod "1b7fe511-77bf-4a50-b42a-3dee332f2a69" (UID: "1b7fe511-77bf-4a50-b42a-3dee332f2a69"). InnerVolumeSpecName "kube-api-access-gjsnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.218359 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data" (OuterVolumeSpecName: "config-data") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250636 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9rl6\" (UniqueName: \"kubernetes.io/projected/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-kube-api-access-b9rl6\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250669 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250680 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250702 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250713 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250723 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250732 4769 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250742 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250754 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjsnl\" (UniqueName: \"kubernetes.io/projected/1b7fe511-77bf-4a50-b42a-3dee332f2a69-kube-api-access-gjsnl\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250763 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b7fe511-77bf-4a50-b42a-3dee332f2a69-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.250773 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6n4w\" (UniqueName: \"kubernetes.io/projected/4fe97d92-0aaa-4558-9cd2-892355c83d5e-kube-api-access-g6n4w\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.251408 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data" (OuterVolumeSpecName: "config-data") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.251657 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fe97d92-0aaa-4558-9cd2-892355c83d5e" (UID: "4fe97d92-0aaa-4558-9cd2-892355c83d5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.254886 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data" (OuterVolumeSpecName: "config-data") pod "1b7fe511-77bf-4a50-b42a-3dee332f2a69" (UID: "1b7fe511-77bf-4a50-b42a-3dee332f2a69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.263310 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" (UID: "0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.308392 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.367053 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.367088 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.367104 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.367115 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.367127 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fe97d92-0aaa-4558-9cd2-892355c83d5e-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.387193 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b7fe511-77bf-4a50-b42a-3dee332f2a69" (UID: "1b7fe511-77bf-4a50-b42a-3dee332f2a69"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.468325 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7fe511-77bf-4a50-b42a-3dee332f2a69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.691408 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-77b6b965c4-r6bn4"] Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.794204 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.809071 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:26 crc kubenswrapper[4769]: W1006 07:32:26.810070 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea2bd31c_b5f9_41c5_ae05_ed8837acc76d.slice/crio-5999fa526c7bc55cb7034d831f1906f122d91504a3dc34201aa0d0873a933ff2 WatchSource:0}: Error finding container 5999fa526c7bc55cb7034d831f1906f122d91504a3dc34201aa0d0873a933ff2: Status 404 returned error can't find the container with id 5999fa526c7bc55cb7034d831f1906f122d91504a3dc34201aa0d0873a933ff2 Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.905270 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5c48f574cf-j5sgk"] Oct 06 07:32:26 crc kubenswrapper[4769]: I1006 07:32:26.911758 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:26 crc kubenswrapper[4769]: W1006 07:32:26.930491 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11689749_b27d_441d_b7d4_9406f215e064.slice/crio-60ae3b22c55f391f84d19c4ce7dd076a4a49276255643cb29ffee5eb2f5052ed WatchSource:0}: Error finding container 60ae3b22c55f391f84d19c4ce7dd076a4a49276255643cb29ffee5eb2f5052ed: Status 404 returned error can't find the container with id 60ae3b22c55f391f84d19c4ce7dd076a4a49276255643cb29ffee5eb2f5052ed Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.092751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.098457 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c48f574cf-j5sgk" event={"ID":"cbc1704d-575e-4d1b-a09b-faac26f1faf2","Type":"ContainerStarted","Data":"80a34f93f43d3adb04395fd7d3fc6c5791a6b24efad73ea82ab0c7253def6a9e"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.101005 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" event={"ID":"33885605-47e9-415c-b311-4797724a7e11","Type":"ContainerStarted","Data":"3fa126758aef925ed73220ddd98f5607f20217b9cb8920468e95b168380aea60"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.103810 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerStarted","Data":"5999fa526c7bc55cb7034d831f1906f122d91504a3dc34201aa0d0873a933ff2"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.105008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" 
event={"ID":"8a39b78c-9254-4275-b3bd-3fc0f137272f","Type":"ContainerStarted","Data":"976eef4a7cede8ecc346fd0dd8b7c1f6015f766cc588135a8d3c86fec519c09d"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.113451 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.113463 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-ll86w" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.113494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerStarted","Data":"60ae3b22c55f391f84d19c4ce7dd076a4a49276255643cb29ffee5eb2f5052ed"} Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.115734 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2ctln" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.197611 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.220786 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.236629 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:27 crc kubenswrapper[4769]: E1006 07:32:27.237058 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-log" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237080 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-log" Oct 06 07:32:27 crc kubenswrapper[4769]: E1006 07:32:27.237095 4769 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1b7fe511-77bf-4a50-b42a-3dee332f2a69" containerName="placement-db-sync" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237101 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7fe511-77bf-4a50-b42a-3dee332f2a69" containerName="placement-db-sync" Oct 06 07:32:27 crc kubenswrapper[4769]: E1006 07:32:27.237112 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" containerName="keystone-bootstrap" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237117 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" containerName="keystone-bootstrap" Oct 06 07:32:27 crc kubenswrapper[4769]: E1006 07:32:27.237130 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-httpd" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237135 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-httpd" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237316 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" containerName="keystone-bootstrap" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237326 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7fe511-77bf-4a50-b42a-3dee332f2a69" containerName="placement-db-sync" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237342 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-log" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.237355 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" containerName="glance-httpd" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.238336 4769 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.240685 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.241283 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.252009 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-655c85cd69-rkn7x"] Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.253758 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.260233 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.260271 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.260510 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.260701 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.260869 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wgm6x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.261216 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.266468 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:27 crc kubenswrapper[4769]: 
I1006 07:32:27.274609 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-655c85cd69-rkn7x"] Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.288873 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-combined-ca-bundle\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.288937 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htv8\" (UniqueName: \"kubernetes.io/projected/72492080-7681-4cf3-b84b-5a4d33f529df-kube-api-access-4htv8\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.288975 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.288991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289009 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289033 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-scripts\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289184 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-public-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289217 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-fernet-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289237 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289275 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-config-data\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289333 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-internal-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289367 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-credential-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289407 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zrkw\" (UniqueName: \"kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289454 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289566 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.289704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.323046 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9b886c44b-zxp8t"] Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.324757 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.330790 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.330882 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.330979 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.331032 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2k6x8" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.331169 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.343667 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b886c44b-zxp8t"] Oct 06 
07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.390995 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-public-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-fernet-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391051 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391070 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-config-data\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391092 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-internal-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391108 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-credential-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-config-data\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391154 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zrkw\" (UniqueName: \"kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391174 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391194 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391211 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-scripts\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391238 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-logs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391265 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-internal-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391283 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391301 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-public-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391323 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-combined-ca-bundle\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391354 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4htv8\" (UniqueName: \"kubernetes.io/projected/72492080-7681-4cf3-b84b-5a4d33f529df-kube-api-access-4htv8\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391384 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391399 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391454 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-combined-ca-bundle\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391474 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-scripts\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.391499 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghgq\" (UniqueName: \"kubernetes.io/projected/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-kube-api-access-gghgq\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.392058 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.392662 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.392800 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" 
(UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.397641 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.397782 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-public-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.398348 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.399390 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-combined-ca-bundle\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.401377 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-scripts\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " 
pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.401653 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-internal-tls-certs\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.401714 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-fernet-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.401734 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-credential-keys\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.403902 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72492080-7681-4cf3-b84b-5a4d33f529df-config-data\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.407207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.408044 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4htv8\" (UniqueName: \"kubernetes.io/projected/72492080-7681-4cf3-b84b-5a4d33f529df-kube-api-access-4htv8\") pod \"keystone-655c85cd69-rkn7x\" (UID: \"72492080-7681-4cf3-b84b-5a4d33f529df\") " pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.408678 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.411868 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zrkw\" (UniqueName: \"kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.477552 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494053 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-combined-ca-bundle\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494107 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gghgq\" (UniqueName: \"kubernetes.io/projected/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-kube-api-access-gghgq\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-config-data\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494179 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-scripts\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494204 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-logs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494229 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-internal-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.494249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-public-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.495149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-logs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.499275 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-public-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.499704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-config-data\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.501689 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-internal-tls-certs\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.504129 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-combined-ca-bundle\") pod \"placement-9b886c44b-zxp8t\" (UID: 
\"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.513415 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghgq\" (UniqueName: \"kubernetes.io/projected/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-kube-api-access-gghgq\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.520980 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/088c3631-1983-4fb6-8e21-0c5b6f7b11c2-scripts\") pod \"placement-9b886c44b-zxp8t\" (UID: \"088c3631-1983-4fb6-8e21-0c5b6f7b11c2\") " pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.575053 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.587311 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:27 crc kubenswrapper[4769]: I1006 07:32:27.643987 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.200602 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fe97d92-0aaa-4558-9cd2-892355c83d5e" path="/var/lib/kubelet/pods/4fe97d92-0aaa-4558-9cd2-892355c83d5e/volumes" Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.202020 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerStarted","Data":"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f"} Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.225327 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerStarted","Data":"decc0a2fe76868204b241fea47efe54f52b45d981a45a678cd9bcef1b82ad023"} Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.225383 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerStarted","Data":"b25ea06e1f09d3dd3677d778a0daf08e6e90c6da8d696daada3c2547167dd0a5"} Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.226885 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.226920 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.240139 4769 generic.go:334] "Generic (PLEG): container finished" podID="33885605-47e9-415c-b311-4797724a7e11" containerID="3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9" exitCode=0 Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.241078 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" event={"ID":"33885605-47e9-415c-b311-4797724a7e11","Type":"ContainerDied","Data":"3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9"} Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.345030 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76647d8c4d-8r7dp" podStartSLOduration=3.34501299 podStartE2EDuration="3.34501299s" podCreationTimestamp="2025-10-06 07:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:28.28501745 +0000 UTC m=+944.809298607" watchObservedRunningTime="2025-10-06 07:32:28.34501299 +0000 UTC m=+944.869294137" Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.358809 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-655c85cd69-rkn7x"] Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.475835 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:32:28 crc kubenswrapper[4769]: I1006 07:32:28.497298 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b886c44b-zxp8t"] Oct 06 07:32:28 crc kubenswrapper[4769]: W1006 07:32:28.767859 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod088c3631_1983_4fb6_8e21_0c5b6f7b11c2.slice/crio-c598bb02acd960765d33d2812275f0e4767a54b3f516ba3b6018be56f0212eec WatchSource:0}: Error finding container c598bb02acd960765d33d2812275f0e4767a54b3f516ba3b6018be56f0212eec: Status 404 returned error can't find the container with id c598bb02acd960765d33d2812275f0e4767a54b3f516ba3b6018be56f0212eec Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.082970 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58cc998976-964nm"] Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 
07:32:29.084270 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.091722 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.091987 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.106963 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58cc998976-964nm"] Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.145934 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data-custom\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146045 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-internal-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146073 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52bdd792-befb-4b73-b231-9e8301f3806a-logs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146109 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-combined-ca-bundle\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djhzt\" (UniqueName: \"kubernetes.io/projected/52bdd792-befb-4b73-b231-9e8301f3806a-kube-api-access-djhzt\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146178 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.146223 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-public-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247311 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-combined-ca-bundle\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 
07:32:29.247354 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djhzt\" (UniqueName: \"kubernetes.io/projected/52bdd792-befb-4b73-b231-9e8301f3806a-kube-api-access-djhzt\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247401 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247451 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-public-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247532 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data-custom\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247584 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-internal-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.247607 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52bdd792-befb-4b73-b231-9e8301f3806a-logs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.248935 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52bdd792-befb-4b73-b231-9e8301f3806a-logs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.264371 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-internal-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.265497 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b886c44b-zxp8t" event={"ID":"088c3631-1983-4fb6-8e21-0c5b6f7b11c2","Type":"ContainerStarted","Data":"c598bb02acd960765d33d2812275f0e4767a54b3f516ba3b6018be56f0212eec"} Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.265733 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data-custom\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.266575 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-combined-ca-bundle\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.267043 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-config-data\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.270362 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djhzt\" (UniqueName: \"kubernetes.io/projected/52bdd792-befb-4b73-b231-9e8301f3806a-kube-api-access-djhzt\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.271045 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerStarted","Data":"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce"} Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.271403 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/52bdd792-befb-4b73-b231-9e8301f3806a-public-tls-certs\") pod \"barbican-api-58cc998976-964nm\" (UID: \"52bdd792-befb-4b73-b231-9e8301f3806a\") " pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.272305 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-655c85cd69-rkn7x" event={"ID":"72492080-7681-4cf3-b84b-5a4d33f529df","Type":"ContainerStarted","Data":"af0fc8afca315a76020ded2c0d0a62a70d1375c4c814cb7797afb0ee84ba873d"} 
Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.274024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerStarted","Data":"510c99935126fb1b9a5878f3813e1431ba346d0b8a347793dda61fb2f5ffed2a"} Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.295341 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.29531863 podStartE2EDuration="14.29531863s" podCreationTimestamp="2025-10-06 07:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:29.295314039 +0000 UTC m=+945.819595186" watchObservedRunningTime="2025-10-06 07:32:29.29531863 +0000 UTC m=+945.819599777" Oct 06 07:32:29 crc kubenswrapper[4769]: I1006 07:32:29.413208 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:30 crc kubenswrapper[4769]: I1006 07:32:30.283697 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d89b00e-b803-4c22-b820-653e98f239b0" containerID="71eaac04018cfb7703bfb9c2a0ba922686e6516ce46b112a11e28376db44f53f" exitCode=0 Oct 06 07:32:30 crc kubenswrapper[4769]: I1006 07:32:30.283825 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thnwc" event={"ID":"7d89b00e-b803-4c22-b820-653e98f239b0","Type":"ContainerDied","Data":"71eaac04018cfb7703bfb9c2a0ba922686e6516ce46b112a11e28376db44f53f"} Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.328237 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-thnwc" event={"ID":"7d89b00e-b803-4c22-b820-653e98f239b0","Type":"ContainerDied","Data":"1ee8172a1bf388c3b5fb0d54e8840853dbe8ece8ce3a1b0aba2111a56802a776"} Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.329015 
4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee8172a1bf388c3b5fb0d54e8840853dbe8ece8ce3a1b0aba2111a56802a776" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.395335 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.425589 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config\") pod \"7d89b00e-b803-4c22-b820-653e98f239b0\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.425776 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd59p\" (UniqueName: \"kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p\") pod \"7d89b00e-b803-4c22-b820-653e98f239b0\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.426004 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle\") pod \"7d89b00e-b803-4c22-b820-653e98f239b0\" (UID: \"7d89b00e-b803-4c22-b820-653e98f239b0\") " Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.436523 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p" (OuterVolumeSpecName: "kube-api-access-qd59p") pod "7d89b00e-b803-4c22-b820-653e98f239b0" (UID: "7d89b00e-b803-4c22-b820-653e98f239b0"). InnerVolumeSpecName "kube-api-access-qd59p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.501899 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config" (OuterVolumeSpecName: "config") pod "7d89b00e-b803-4c22-b820-653e98f239b0" (UID: "7d89b00e-b803-4c22-b820-653e98f239b0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.529145 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd59p\" (UniqueName: \"kubernetes.io/projected/7d89b00e-b803-4c22-b820-653e98f239b0-kube-api-access-qd59p\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.529262 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.577657 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d89b00e-b803-4c22-b820-653e98f239b0" (UID: "7d89b00e-b803-4c22-b820-653e98f239b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.630639 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d89b00e-b803-4c22-b820-653e98f239b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:32 crc kubenswrapper[4769]: I1006 07:32:32.742865 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58cc998976-964nm"] Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.344315 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerStarted","Data":"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.347362 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" event={"ID":"33885605-47e9-415c-b311-4797724a7e11","Type":"ContainerStarted","Data":"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.347468 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.359547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b886c44b-zxp8t" event={"ID":"088c3631-1983-4fb6-8e21-0c5b6f7b11c2","Type":"ContainerStarted","Data":"5fc24beecfe1b8c2bbe2a4e9dcebb8570764f1bf21f3cd5ed963f9f7308fdde4"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.359629 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b886c44b-zxp8t" event={"ID":"088c3631-1983-4fb6-8e21-0c5b6f7b11c2","Type":"ContainerStarted","Data":"3d607f6b8e76b3ba8d6934f8e058f7afbd8828b85a4fa13b314402a6da231310"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.359655 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.359695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.364538 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" podStartSLOduration=8.364522149 podStartE2EDuration="8.364522149s" podCreationTimestamp="2025-10-06 07:32:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:33.363775529 +0000 UTC m=+949.888056676" watchObservedRunningTime="2025-10-06 07:32:33.364522149 +0000 UTC m=+949.888803296" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.366261 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" event={"ID":"8a39b78c-9254-4275-b3bd-3fc0f137272f","Type":"ContainerStarted","Data":"74e4ec4c413a1e37d4cb4c48a3cb58bcec903a5a195852e795b2f107aba8d14d"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.366345 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" event={"ID":"8a39b78c-9254-4275-b3bd-3fc0f137272f","Type":"ContainerStarted","Data":"d42d348b2a5423425e11bdef5ac17116f34290b7d3a122e50c58dc7ed802a6da"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.369175 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerStarted","Data":"557a2cb62dfad90b655bae50c5be15db07d45329539b24cb2a0a0471433a4089"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.371244 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58cc998976-964nm" 
event={"ID":"52bdd792-befb-4b73-b231-9e8301f3806a","Type":"ContainerStarted","Data":"3afe3782217bdf5acd2cf068f389989b89d2592690c8826767a1f859f1cc25aa"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.371302 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58cc998976-964nm" event={"ID":"52bdd792-befb-4b73-b231-9e8301f3806a","Type":"ContainerStarted","Data":"114e5b0f59a7353372984177daa7d3f8baff571e43bd1ee1e3500b5e6968f2b0"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.373317 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c48f574cf-j5sgk" event={"ID":"cbc1704d-575e-4d1b-a09b-faac26f1faf2","Type":"ContainerStarted","Data":"de35de8c6af074c6e2db0210cd725e9afe54de7f7e44a42f0dcbe30d4a812293"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.373347 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5c48f574cf-j5sgk" event={"ID":"cbc1704d-575e-4d1b-a09b-faac26f1faf2","Type":"ContainerStarted","Data":"fb8a08bd12649ca909538e45f05d5b472e26df6ae4ff14ae0546c9fcbaa7e091"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.376448 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-thnwc" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.377560 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-655c85cd69-rkn7x" event={"ID":"72492080-7681-4cf3-b84b-5a4d33f529df","Type":"ContainerStarted","Data":"2e770de0eb6a575c39707869605cccb2a70bbe6196b725ce7be08e8ce699f633"} Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.377689 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.389313 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9b886c44b-zxp8t" podStartSLOduration=6.3892929259999995 podStartE2EDuration="6.389292926s" podCreationTimestamp="2025-10-06 07:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:33.387772995 +0000 UTC m=+949.912054142" watchObservedRunningTime="2025-10-06 07:32:33.389292926 +0000 UTC m=+949.913574073" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.414004 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-77b6b965c4-r6bn4" podStartSLOduration=2.89234546 podStartE2EDuration="8.413989461s" podCreationTimestamp="2025-10-06 07:32:25 +0000 UTC" firstStartedPulling="2025-10-06 07:32:26.707590964 +0000 UTC m=+943.231872111" lastFinishedPulling="2025-10-06 07:32:32.229234965 +0000 UTC m=+948.753516112" observedRunningTime="2025-10-06 07:32:33.410115766 +0000 UTC m=+949.934396933" watchObservedRunningTime="2025-10-06 07:32:33.413989461 +0000 UTC m=+949.938270608" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.438734 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-655c85cd69-rkn7x" podStartSLOduration=6.438716607 
podStartE2EDuration="6.438716607s" podCreationTimestamp="2025-10-06 07:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:33.433614998 +0000 UTC m=+949.957896155" watchObservedRunningTime="2025-10-06 07:32:33.438716607 +0000 UTC m=+949.962997754" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.459940 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5c48f574cf-j5sgk" podStartSLOduration=3.102209635 podStartE2EDuration="8.459881865s" podCreationTimestamp="2025-10-06 07:32:25 +0000 UTC" firstStartedPulling="2025-10-06 07:32:26.937539218 +0000 UTC m=+943.461820365" lastFinishedPulling="2025-10-06 07:32:32.295211448 +0000 UTC m=+948.819492595" observedRunningTime="2025-10-06 07:32:33.456182824 +0000 UTC m=+949.980463971" watchObservedRunningTime="2025-10-06 07:32:33.459881865 +0000 UTC m=+949.984163022" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.707674 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.744415 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:32:33 crc kubenswrapper[4769]: E1006 07:32:33.744775 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d89b00e-b803-4c22-b820-653e98f239b0" containerName="neutron-db-sync" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.744790 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d89b00e-b803-4c22-b820-653e98f239b0" containerName="neutron-db-sync" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.744974 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d89b00e-b803-4c22-b820-653e98f239b0" containerName="neutron-db-sync" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.745789 4769 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.770977 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.835491 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.837925 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.851022 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.851111 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.851604 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4dvbm" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.851811 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.853254 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.853521 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: 
\"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.858092 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsb4\" (UniqueName: \"kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.858183 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.858246 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.858277 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.900132 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.961769 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8hsb4\" (UniqueName: \"kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.961826 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962099 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962124 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962143 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962169 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962192 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962356 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962523 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l6wx\" (UniqueName: \"kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962572 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.962780 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.967459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.967763 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.973183 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:33 crc kubenswrapper[4769]: I1006 07:32:33.986983 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: 
\"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.007309 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsb4\" (UniqueName: \"kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4\") pod \"dnsmasq-dns-5cf986bc8f-wzfq2\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.064565 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.065446 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.065543 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l6wx\" (UniqueName: \"kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.065579 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.065697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.065764 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.072808 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.080609 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.082740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.083162 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " 
pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.083845 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l6wx\" (UniqueName: \"kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx\") pod \"neutron-5dd985fb44-8mpq5\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.191762 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.391774 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerStarted","Data":"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1"} Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.402086 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58cc998976-964nm" event={"ID":"52bdd792-befb-4b73-b231-9e8301f3806a","Type":"ContainerStarted","Data":"be8e5aa39dda4f722ed274d7e9f23e91ad45abb70919bfbdf9990b86810fef8e"} Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.402124 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.403779 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.417100 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.417083673 podStartE2EDuration="7.417083673s" podCreationTimestamp="2025-10-06 07:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-06 07:32:34.414092541 +0000 UTC m=+950.938373688" watchObservedRunningTime="2025-10-06 07:32:34.417083673 +0000 UTC m=+950.941364820" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.446728 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58cc998976-964nm" podStartSLOduration=5.446712793 podStartE2EDuration="5.446712793s" podCreationTimestamp="2025-10-06 07:32:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:34.44079305 +0000 UTC m=+950.965074207" watchObservedRunningTime="2025-10-06 07:32:34.446712793 +0000 UTC m=+950.970993940" Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.638699 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:32:34 crc kubenswrapper[4769]: W1006 07:32:34.639248 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29620dea_acc5_4992_8a71_c4254e7b8a22.slice/crio-b42de48ff95e1296efaa0a842bd0609a5703218d10cccb5112047b6b68fbac0e WatchSource:0}: Error finding container b42de48ff95e1296efaa0a842bd0609a5703218d10cccb5112047b6b68fbac0e: Status 404 returned error can't find the container with id b42de48ff95e1296efaa0a842bd0609a5703218d10cccb5112047b6b68fbac0e Oct 06 07:32:34 crc kubenswrapper[4769]: I1006 07:32:34.857176 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:32:34 crc kubenswrapper[4769]: W1006 07:32:34.866196 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7faa379c_e24f_4820_87d8_4e94e641f298.slice/crio-7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455 WatchSource:0}: Error finding container 
7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455: Status 404 returned error can't find the container with id 7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455 Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.412540 4769 generic.go:334] "Generic (PLEG): container finished" podID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerID="fdc99787210a6ba7f502855de74fc34d1f37fe12e0e9521b573da4dbc042876f" exitCode=0 Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.413726 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" event={"ID":"29620dea-acc5-4992-8a71-c4254e7b8a22","Type":"ContainerDied","Data":"fdc99787210a6ba7f502855de74fc34d1f37fe12e0e9521b573da4dbc042876f"} Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.413809 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" event={"ID":"29620dea-acc5-4992-8a71-c4254e7b8a22","Type":"ContainerStarted","Data":"b42de48ff95e1296efaa0a842bd0609a5703218d10cccb5112047b6b68fbac0e"} Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.417188 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerStarted","Data":"8324ad13c1a3c4c51ce4233012ba5c7a5269d519deacff8bea8681abf29fb22e"} Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.417288 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerStarted","Data":"e0568f4d060c057d1c843efbe83d0b97496a11e921085f3b85dde6fe6f7c7075"} Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.417359 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" 
event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerStarted","Data":"7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455"} Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.417355 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="dnsmasq-dns" containerID="cri-o://1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721" gracePeriod=10 Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.469296 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5dd985fb44-8mpq5" podStartSLOduration=2.469278607 podStartE2EDuration="2.469278607s" podCreationTimestamp="2025-10-06 07:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:35.460343482 +0000 UTC m=+951.984624629" watchObservedRunningTime="2025-10-06 07:32:35.469278607 +0000 UTC m=+951.993559754" Oct 06 07:32:35 crc kubenswrapper[4769]: I1006 07:32:35.871130 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016234 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016393 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016486 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016732 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016812 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4779v\" (UniqueName: \"kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.016908 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0\") pod \"33885605-47e9-415c-b311-4797724a7e11\" (UID: \"33885605-47e9-415c-b311-4797724a7e11\") " Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.033347 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v" (OuterVolumeSpecName: "kube-api-access-4779v") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "kube-api-access-4779v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.067673 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.068915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.072726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.083343 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.087350 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config" (OuterVolumeSpecName: "config") pod "33885605-47e9-415c-b311-4797724a7e11" (UID: "33885605-47e9-415c-b311-4797724a7e11"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120014 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120047 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4779v\" (UniqueName: \"kubernetes.io/projected/33885605-47e9-415c-b311-4797724a7e11-kube-api-access-4779v\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120061 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120071 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120080 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.120088 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/33885605-47e9-415c-b311-4797724a7e11-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.311660 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.312171 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.361988 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.409099 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.456223 4769 generic.go:334] "Generic (PLEG): container finished" podID="33885605-47e9-415c-b311-4797724a7e11" containerID="1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721" exitCode=0 Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.456281 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" event={"ID":"33885605-47e9-415c-b311-4797724a7e11","Type":"ContainerDied","Data":"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721"} Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.456305 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" event={"ID":"33885605-47e9-415c-b311-4797724a7e11","Type":"ContainerDied","Data":"3fa126758aef925ed73220ddd98f5607f20217b9cb8920468e95b168380aea60"} Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.456319 4769 scope.go:117] "RemoveContainer" containerID="1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.456458 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bc8796b9-ggdhr" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.481869 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b48758c79-6xdwf"] Oct 06 07:32:36 crc kubenswrapper[4769]: E1006 07:32:36.482281 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="dnsmasq-dns" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.482304 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="dnsmasq-dns" Oct 06 07:32:36 crc kubenswrapper[4769]: E1006 07:32:36.482325 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="init" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.482333 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="init" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.482604 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="33885605-47e9-415c-b311-4797724a7e11" containerName="dnsmasq-dns" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.483830 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" event={"ID":"29620dea-acc5-4992-8a71-c4254e7b8a22","Type":"ContainerStarted","Data":"f92720d79c86215430b3f74500bb16184a254ebbcd84250107d554e751380ea0"} 
Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.483875 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.483890 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.483904 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.483915 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.484003 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.484880 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b48758c79-6xdwf"] Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.486412 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.486763 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.493448 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.504804 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66bc8796b9-ggdhr"] Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.507778 4769 scope.go:117] "RemoveContainer" containerID="3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.525474 4769 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" podStartSLOduration=3.525455849 podStartE2EDuration="3.525455849s" podCreationTimestamp="2025-10-06 07:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:36.517734178 +0000 UTC m=+953.042015325" watchObservedRunningTime="2025-10-06 07:32:36.525455849 +0000 UTC m=+953.049737006" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.563273 4769 scope.go:117] "RemoveContainer" containerID="1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721" Oct 06 07:32:36 crc kubenswrapper[4769]: E1006 07:32:36.573579 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721\": container with ID starting with 1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721 not found: ID does not exist" containerID="1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.573620 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721"} err="failed to get container status \"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721\": rpc error: code = NotFound desc = could not find container \"1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721\": container with ID starting with 1ea1b500648a7cf9de120efea2ce91ea915f570b500fa8fd7495ffb4eb5f3721 not found: ID does not exist" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.573645 4769 scope.go:117] "RemoveContainer" containerID="3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9" Oct 06 07:32:36 crc kubenswrapper[4769]: E1006 07:32:36.574573 4769 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9\": container with ID starting with 3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9 not found: ID does not exist" containerID="3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.574617 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9"} err="failed to get container status \"3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9\": rpc error: code = NotFound desc = could not find container \"3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9\": container with ID starting with 3d84f50ca513d2962cc007d066d7f5ce8ef8d729daa1001ca51b654740e8cfb9 not found: ID does not exist" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631351 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-internal-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-ovndb-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631524 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-combined-ca-bundle\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-public-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631625 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992lt\" (UniqueName: \"kubernetes.io/projected/7bf270e0-baab-4f7d-ae98-8e3776b1518d-kube-api-access-992lt\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.631657 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-httpd-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733401 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733499 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-internal-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733521 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-ovndb-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733555 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-combined-ca-bundle\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733582 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-public-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733612 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992lt\" (UniqueName: \"kubernetes.io/projected/7bf270e0-baab-4f7d-ae98-8e3776b1518d-kube-api-access-992lt\") 
pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.733637 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-httpd-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.739414 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-combined-ca-bundle\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.745329 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.747922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-httpd-config\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.747941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-internal-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc 
kubenswrapper[4769]: I1006 07:32:36.754054 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-ovndb-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.759117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992lt\" (UniqueName: \"kubernetes.io/projected/7bf270e0-baab-4f7d-ae98-8e3776b1518d-kube-api-access-992lt\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.760815 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bf270e0-baab-4f7d-ae98-8e3776b1518d-public-tls-certs\") pod \"neutron-5b48758c79-6xdwf\" (UID: \"7bf270e0-baab-4f7d-ae98-8e3776b1518d\") " pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:36 crc kubenswrapper[4769]: I1006 07:32:36.809464 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.408212 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b48758c79-6xdwf"] Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.576380 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.576810 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.622483 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.638039 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.684905 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:37 crc kubenswrapper[4769]: I1006 07:32:37.814508 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.196533 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33885605-47e9-415c-b311-4797724a7e11" path="/var/lib/kubelet/pods/33885605-47e9-415c-b311-4797724a7e11/volumes" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.496363 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.496386 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.497457 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.497482 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.553837 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:38 crc kubenswrapper[4769]: I1006 07:32:38.560530 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 07:32:40 crc kubenswrapper[4769]: I1006 07:32:40.511756 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:32:40 crc kubenswrapper[4769]: I1006 07:32:40.512047 4769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 06 07:32:40 crc kubenswrapper[4769]: I1006 07:32:40.618877 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 07:32:40 crc kubenswrapper[4769]: I1006 07:32:40.678309 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 07:32:40 crc kubenswrapper[4769]: I1006 07:32:40.737054 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.096377 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58cc998976-964nm" Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.164729 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.165004 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76647d8c4d-8r7dp" 
podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api-log" containerID="cri-o://b25ea06e1f09d3dd3677d778a0daf08e6e90c6da8d696daada3c2547167dd0a5" gracePeriod=30 Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.165150 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76647d8c4d-8r7dp" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api" containerID="cri-o://decc0a2fe76868204b241fea47efe54f52b45d981a45a678cd9bcef1b82ad023" gracePeriod=30 Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.554258 4769 generic.go:334] "Generic (PLEG): container finished" podID="11689749-b27d-441d-b7d4-9406f215e064" containerID="b25ea06e1f09d3dd3677d778a0daf08e6e90c6da8d696daada3c2547167dd0a5" exitCode=143 Oct 06 07:32:41 crc kubenswrapper[4769]: I1006 07:32:41.554488 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerDied","Data":"b25ea06e1f09d3dd3677d778a0daf08e6e90c6da8d696daada3c2547167dd0a5"} Oct 06 07:32:42 crc kubenswrapper[4769]: I1006 07:32:42.030267 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76647d8c4d-8r7dp" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.153:9311/healthcheck\": read tcp 10.217.0.2:33004->10.217.0.153:9311: read: connection reset by peer" Oct 06 07:32:42 crc kubenswrapper[4769]: I1006 07:32:42.030333 4769 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76647d8c4d-8r7dp" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.153:9311/healthcheck\": read tcp 10.217.0.2:33002->10.217.0.153:9311: read: connection reset by peer" Oct 06 07:32:42 crc kubenswrapper[4769]: I1006 07:32:42.567847 4769 generic.go:334] "Generic 
(PLEG): container finished" podID="11689749-b27d-441d-b7d4-9406f215e064" containerID="decc0a2fe76868204b241fea47efe54f52b45d981a45a678cd9bcef1b82ad023" exitCode=0 Oct 06 07:32:42 crc kubenswrapper[4769]: I1006 07:32:42.568341 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerDied","Data":"decc0a2fe76868204b241fea47efe54f52b45d981a45a678cd9bcef1b82ad023"} Oct 06 07:32:42 crc kubenswrapper[4769]: I1006 07:32:42.571276 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b48758c79-6xdwf" event={"ID":"7bf270e0-baab-4f7d-ae98-8e3776b1518d","Type":"ContainerStarted","Data":"cd36be77fa1f60f889f2c0b3b3f48eab6c1ecdb27500625e2eb017ff81e90a79"} Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.059690 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.163647 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wvms\" (UniqueName: \"kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms\") pod \"11689749-b27d-441d-b7d4-9406f215e064\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.163977 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom\") pod \"11689749-b27d-441d-b7d4-9406f215e064\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.164067 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle\") pod 
\"11689749-b27d-441d-b7d4-9406f215e064\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.164800 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs\") pod \"11689749-b27d-441d-b7d4-9406f215e064\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.164838 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data\") pod \"11689749-b27d-441d-b7d4-9406f215e064\" (UID: \"11689749-b27d-441d-b7d4-9406f215e064\") " Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.167912 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs" (OuterVolumeSpecName: "logs") pod "11689749-b27d-441d-b7d4-9406f215e064" (UID: "11689749-b27d-441d-b7d4-9406f215e064"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.172369 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "11689749-b27d-441d-b7d4-9406f215e064" (UID: "11689749-b27d-441d-b7d4-9406f215e064"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.172371 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms" (OuterVolumeSpecName: "kube-api-access-8wvms") pod "11689749-b27d-441d-b7d4-9406f215e064" (UID: "11689749-b27d-441d-b7d4-9406f215e064"). 
InnerVolumeSpecName "kube-api-access-8wvms". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.202582 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11689749-b27d-441d-b7d4-9406f215e064" (UID: "11689749-b27d-441d-b7d4-9406f215e064"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.221693 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data" (OuterVolumeSpecName: "config-data") pod "11689749-b27d-441d-b7d4-9406f215e064" (UID: "11689749-b27d-441d-b7d4-9406f215e064"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.268643 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wvms\" (UniqueName: \"kubernetes.io/projected/11689749-b27d-441d-b7d4-9406f215e064-kube-api-access-8wvms\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.268683 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.268705 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.268717 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/11689749-b27d-441d-b7d4-9406f215e064-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.268727 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11689749-b27d-441d-b7d4-9406f215e064-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583297 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerStarted","Data":"b7bf9b3db510679ed9d3eac0046f0225034f228b21aab6a4690e77cc5e3d2b5e"} Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583483 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583502 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-central-agent" containerID="cri-o://676ce3c05b5a0c3694f26e6491299a5f584a906c4661e6cb54453732db741a8c" gracePeriod=30 Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583544 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-notification-agent" containerID="cri-o://47bcd566d240d501ca78f81c29c18b2c7d752a10a9d750c03f357a9f6ba11b20" gracePeriod=30 Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583522 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="proxy-httpd" containerID="cri-o://b7bf9b3db510679ed9d3eac0046f0225034f228b21aab6a4690e77cc5e3d2b5e" gracePeriod=30 Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.583624 4769 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="sg-core" containerID="cri-o://557a2cb62dfad90b655bae50c5be15db07d45329539b24cb2a0a0471433a4089" gracePeriod=30 Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.587585 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b48758c79-6xdwf" event={"ID":"7bf270e0-baab-4f7d-ae98-8e3776b1518d","Type":"ContainerStarted","Data":"adcade0aa01ab8fb73f04fcad5d32ef6426376796a3d2dfe54f330b51b120181"} Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.587756 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.587867 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b48758c79-6xdwf" event={"ID":"7bf270e0-baab-4f7d-ae98-8e3776b1518d","Type":"ContainerStarted","Data":"bd2b915705390ca2bee03cdbb0d20bf727b6a50cbcaf9d94aa042ef934fc61aa"} Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.610894 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.192360019 podStartE2EDuration="44.610877085s" podCreationTimestamp="2025-10-06 07:31:59 +0000 UTC" firstStartedPulling="2025-10-06 07:32:01.593632898 +0000 UTC m=+918.117914045" lastFinishedPulling="2025-10-06 07:32:43.012149964 +0000 UTC m=+959.536431111" observedRunningTime="2025-10-06 07:32:43.605391355 +0000 UTC m=+960.129672522" watchObservedRunningTime="2025-10-06 07:32:43.610877085 +0000 UTC m=+960.135158232" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.613250 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76647d8c4d-8r7dp" event={"ID":"11689749-b27d-441d-b7d4-9406f215e064","Type":"ContainerDied","Data":"60ae3b22c55f391f84d19c4ce7dd076a4a49276255643cb29ffee5eb2f5052ed"} Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.613307 
4769 scope.go:117] "RemoveContainer" containerID="decc0a2fe76868204b241fea47efe54f52b45d981a45a678cd9bcef1b82ad023" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.613329 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76647d8c4d-8r7dp" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.635334 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b48758c79-6xdwf" podStartSLOduration=7.635314803 podStartE2EDuration="7.635314803s" podCreationTimestamp="2025-10-06 07:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:43.631997362 +0000 UTC m=+960.156278509" watchObservedRunningTime="2025-10-06 07:32:43.635314803 +0000 UTC m=+960.159595950" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.654412 4769 scope.go:117] "RemoveContainer" containerID="b25ea06e1f09d3dd3677d778a0daf08e6e90c6da8d696daada3c2547167dd0a5" Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.658591 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:43 crc kubenswrapper[4769]: I1006 07:32:43.664917 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76647d8c4d-8r7dp"] Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.065609 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.123261 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.123756 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="dnsmasq-dns" 
containerID="cri-o://4c81c0b1f8f70b17392b85053686fc53a02b844b623856fd48bd917147373d34" gracePeriod=10 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.205321 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11689749-b27d-441d-b7d4-9406f215e064" path="/var/lib/kubelet/pods/11689749-b27d-441d-b7d4-9406f215e064/volumes" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.653904 4769 generic.go:334] "Generic (PLEG): container finished" podID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerID="4c81c0b1f8f70b17392b85053686fc53a02b844b623856fd48bd917147373d34" exitCode=0 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.654276 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" event={"ID":"4d13b502-7f7e-447b-a4bc-008681b34ee0","Type":"ContainerDied","Data":"4c81c0b1f8f70b17392b85053686fc53a02b844b623856fd48bd917147373d34"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.654301 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" event={"ID":"4d13b502-7f7e-447b-a4bc-008681b34ee0","Type":"ContainerDied","Data":"af85802338dc8d04870bbed285bba3498831e879cfb8557f07e88e89fcfd9f30"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.654312 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af85802338dc8d04870bbed285bba3498831e879cfb8557f07e88e89fcfd9f30" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.674673 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675540 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerID="b7bf9b3db510679ed9d3eac0046f0225034f228b21aab6a4690e77cc5e3d2b5e" exitCode=0 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675565 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerID="557a2cb62dfad90b655bae50c5be15db07d45329539b24cb2a0a0471433a4089" exitCode=2 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675573 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerID="47bcd566d240d501ca78f81c29c18b2c7d752a10a9d750c03f357a9f6ba11b20" exitCode=0 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675581 4769 generic.go:334] "Generic (PLEG): container finished" podID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerID="676ce3c05b5a0c3694f26e6491299a5f584a906c4661e6cb54453732db741a8c" exitCode=0 Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675614 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerDied","Data":"b7bf9b3db510679ed9d3eac0046f0225034f228b21aab6a4690e77cc5e3d2b5e"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675634 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerDied","Data":"557a2cb62dfad90b655bae50c5be15db07d45329539b24cb2a0a0471433a4089"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.675644 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerDied","Data":"47bcd566d240d501ca78f81c29c18b2c7d752a10a9d750c03f357a9f6ba11b20"} Oct 06 07:32:44 
crc kubenswrapper[4769]: I1006 07:32:44.675653 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerDied","Data":"676ce3c05b5a0c3694f26e6491299a5f584a906c4661e6cb54453732db741a8c"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.678535 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qtgml" event={"ID":"1dc18379-7117-430b-9d0f-65115eaedf51","Type":"ContainerStarted","Data":"544c825d3d584d3a785c9178b8c1f307caa89327e272758eae009f9dab7b1f94"} Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.707458 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.707571 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.707609 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmgg7\" (UniqueName: \"kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.707725 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 
07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.707741 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.708383 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0\") pod \"4d13b502-7f7e-447b-a4bc-008681b34ee0\" (UID: \"4d13b502-7f7e-447b-a4bc-008681b34ee0\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.708593 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qtgml" podStartSLOduration=3.9968780170000002 podStartE2EDuration="45.708582453s" podCreationTimestamp="2025-10-06 07:31:59 +0000 UTC" firstStartedPulling="2025-10-06 07:32:01.256055094 +0000 UTC m=+917.780336241" lastFinishedPulling="2025-10-06 07:32:42.96775953 +0000 UTC m=+959.492040677" observedRunningTime="2025-10-06 07:32:44.706537306 +0000 UTC m=+961.230818463" watchObservedRunningTime="2025-10-06 07:32:44.708582453 +0000 UTC m=+961.232863600" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.717736 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7" (OuterVolumeSpecName: "kube-api-access-fmgg7") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "kube-api-access-fmgg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.760630 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.792340 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810592 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql7d2\" (UniqueName: \"kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810732 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810766 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810818 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810901 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.810946 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.811017 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts\") pod \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\" (UID: \"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf\") " Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.811453 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.811472 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmgg7\" (UniqueName: \"kubernetes.io/projected/4d13b502-7f7e-447b-a4bc-008681b34ee0-kube-api-access-fmgg7\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.811512 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.811547 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.814492 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config" (OuterVolumeSpecName: "config") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.819874 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.821378 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2" (OuterVolumeSpecName: "kube-api-access-ql7d2") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "kube-api-access-ql7d2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.824442 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts" (OuterVolumeSpecName: "scripts") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.836364 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.840303 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.846760 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d13b502-7f7e-447b-a4bc-008681b34ee0" (UID: "4d13b502-7f7e-447b-a4bc-008681b34ee0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.879171 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.905283 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data" (OuterVolumeSpecName: "config-data") pod "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" (UID: "f1be3d03-c597-4ee9-a4cd-ee199f8aaecf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913299 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913324 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913337 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql7d2\" (UniqueName: \"kubernetes.io/projected/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-kube-api-access-ql7d2\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913347 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 
07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913356 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913366 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913375 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913383 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913391 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d13b502-7f7e-447b-a4bc-008681b34ee0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913398 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:44 crc kubenswrapper[4769]: I1006 07:32:44.913406 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.692541 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d7577745f-sphwf" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.692885 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1be3d03-c597-4ee9-a4cd-ee199f8aaecf","Type":"ContainerDied","Data":"0db5db2255520557b58747ac605160f1a01dd863ad38393603b60c50ac29d405"} Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.692952 4769 scope.go:117] "RemoveContainer" containerID="b7bf9b3db510679ed9d3eac0046f0225034f228b21aab6a4690e77cc5e3d2b5e" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.692855 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.732595 4769 scope.go:117] "RemoveContainer" containerID="557a2cb62dfad90b655bae50c5be15db07d45329539b24cb2a0a0471433a4089" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.735202 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.752638 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d7577745f-sphwf"] Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.756203 4769 scope.go:117] "RemoveContainer" containerID="47bcd566d240d501ca78f81c29c18b2c7d752a10a9d750c03f357a9f6ba11b20" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.760913 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.777609 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.787221 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.787870 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.787906 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.787930 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-central-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.787947 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-central-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.787971 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="proxy-httpd" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.787987 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="proxy-httpd" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.788011 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="init" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788029 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="init" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.788065 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-notification-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788080 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-notification-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.788113 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="dnsmasq-dns" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788130 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="dnsmasq-dns" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.788165 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api-log" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788183 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api-log" Oct 06 07:32:45 crc kubenswrapper[4769]: E1006 07:32:45.788210 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="sg-core" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788224 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="sg-core" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788757 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="proxy-httpd" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788808 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-notification-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788832 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api-log" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788873 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" containerName="dnsmasq-dns" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788899 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="11689749-b27d-441d-b7d4-9406f215e064" containerName="barbican-api" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788922 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="sg-core" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.788965 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" containerName="ceilometer-central-agent" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.793725 4769 scope.go:117] "RemoveContainer" containerID="676ce3c05b5a0c3694f26e6491299a5f584a906c4661e6cb54453732db741a8c" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.794021 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.796276 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.799140 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.799476 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835449 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835538 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data\") pod \"ceilometer-0\" (UID: 
\"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835654 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835720 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7fwp\" (UniqueName: \"kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835769 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835820 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.835909 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 
07:32:45.937353 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.937755 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7fwp\" (UniqueName: \"kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.937788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.937810 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.938283 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.938688 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.938315 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.938730 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.938788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.942199 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.942751 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.943136 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.944330 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:45 crc kubenswrapper[4769]: I1006 07:32:45.968602 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7fwp\" (UniqueName: \"kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp\") pod \"ceilometer-0\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") " pod="openstack/ceilometer-0" Oct 06 07:32:46 crc kubenswrapper[4769]: I1006 07:32:46.126220 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:32:46 crc kubenswrapper[4769]: I1006 07:32:46.185492 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d13b502-7f7e-447b-a4bc-008681b34ee0" path="/var/lib/kubelet/pods/4d13b502-7f7e-447b-a4bc-008681b34ee0/volumes" Oct 06 07:32:46 crc kubenswrapper[4769]: I1006 07:32:46.186890 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1be3d03-c597-4ee9-a4cd-ee199f8aaecf" path="/var/lib/kubelet/pods/f1be3d03-c597-4ee9-a4cd-ee199f8aaecf/volumes" Oct 06 07:32:46 crc kubenswrapper[4769]: I1006 07:32:46.629718 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:32:46 crc kubenswrapper[4769]: W1006 07:32:46.634521 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb93fa41b_4fbc_42f8_9ba6_68f6b56540f1.slice/crio-c7fe8fefd3b04cd976d79e7cbe3ed2fba69f0cad92886e67472d9f0a4d419ba2 WatchSource:0}: Error finding container c7fe8fefd3b04cd976d79e7cbe3ed2fba69f0cad92886e67472d9f0a4d419ba2: Status 404 returned error can't find the container with id c7fe8fefd3b04cd976d79e7cbe3ed2fba69f0cad92886e67472d9f0a4d419ba2 Oct 06 07:32:46 crc kubenswrapper[4769]: I1006 07:32:46.702253 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerStarted","Data":"c7fe8fefd3b04cd976d79e7cbe3ed2fba69f0cad92886e67472d9f0a4d419ba2"} Oct 06 07:32:47 crc kubenswrapper[4769]: I1006 07:32:47.713896 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerStarted","Data":"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"} Oct 06 07:32:47 crc kubenswrapper[4769]: I1006 07:32:47.714509 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerStarted","Data":"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"} Oct 06 07:32:48 crc kubenswrapper[4769]: I1006 07:32:48.726318 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerStarted","Data":"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"} Oct 06 07:32:49 crc kubenswrapper[4769]: I1006 07:32:49.735749 4769 generic.go:334] "Generic (PLEG): container finished" podID="1dc18379-7117-430b-9d0f-65115eaedf51" containerID="544c825d3d584d3a785c9178b8c1f307caa89327e272758eae009f9dab7b1f94" exitCode=0 Oct 06 07:32:49 crc kubenswrapper[4769]: I1006 07:32:49.736091 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qtgml" event={"ID":"1dc18379-7117-430b-9d0f-65115eaedf51","Type":"ContainerDied","Data":"544c825d3d584d3a785c9178b8c1f307caa89327e272758eae009f9dab7b1f94"} Oct 06 07:32:49 crc kubenswrapper[4769]: I1006 07:32:49.738812 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerStarted","Data":"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"} Oct 06 07:32:49 crc kubenswrapper[4769]: I1006 07:32:49.739815 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:32:49 crc kubenswrapper[4769]: I1006 07:32:49.789668 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.011536525 podStartE2EDuration="4.789650864s" podCreationTimestamp="2025-10-06 07:32:45 +0000 UTC" firstStartedPulling="2025-10-06 07:32:46.637760491 +0000 UTC m=+963.162041628" lastFinishedPulling="2025-10-06 07:32:49.41587482 +0000 UTC m=+965.940155967" observedRunningTime="2025-10-06 07:32:49.781747228 +0000 UTC m=+966.306028375" 
watchObservedRunningTime="2025-10-06 07:32:49.789650864 +0000 UTC m=+966.313932011" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.147190 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.237696 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2vhg\" (UniqueName: \"kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.237786 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.237871 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.237944 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.237975 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: 
\"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.238106 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle\") pod \"1dc18379-7117-430b-9d0f-65115eaedf51\" (UID: \"1dc18379-7117-430b-9d0f-65115eaedf51\") " Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.238141 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.238455 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dc18379-7117-430b-9d0f-65115eaedf51-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.243260 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg" (OuterVolumeSpecName: "kube-api-access-q2vhg") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "kube-api-access-q2vhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.243875 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts" (OuterVolumeSpecName: "scripts") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.257788 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.266286 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.287980 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data" (OuterVolumeSpecName: "config-data") pod "1dc18379-7117-430b-9d0f-65115eaedf51" (UID: "1dc18379-7117-430b-9d0f-65115eaedf51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.339648 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2vhg\" (UniqueName: \"kubernetes.io/projected/1dc18379-7117-430b-9d0f-65115eaedf51-kube-api-access-q2vhg\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.339682 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.339694 4769 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.339702 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.339711 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dc18379-7117-430b-9d0f-65115eaedf51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.766343 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qtgml" Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.766849 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qtgml" event={"ID":"1dc18379-7117-430b-9d0f-65115eaedf51","Type":"ContainerDied","Data":"f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08"} Oct 06 07:32:51 crc kubenswrapper[4769]: I1006 07:32:51.767060 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d24c07784d2a0f65592d4b065969397c809a1b072c1883947b1e6c92337f08" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.043867 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:32:52 crc kubenswrapper[4769]: E1006 07:32:52.044461 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" containerName="cinder-db-sync" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.044537 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" containerName="cinder-db-sync" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.044791 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" containerName="cinder-db-sync" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.045711 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.055546 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hll9s" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.055721 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.058646 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.062283 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.068461 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.093346 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.095274 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.115074 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155625 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttx8h\" (UniqueName: \"kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155656 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155678 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b446q\" (UniqueName: \"kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155698 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155733 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155748 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155780 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155804 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155820 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155846 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.155884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.203946 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.208563 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.212519 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.219005 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257165 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhlfj\" (UniqueName: \"kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257272 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257305 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-ttx8h\" (UniqueName: \"kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257455 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257504 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257544 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b446q\" (UniqueName: \"kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257597 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257634 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257743 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257819 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257838 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257880 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257938 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.257967 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.258048 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 
07:32:52.258175 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.258819 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.259548 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.259641 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.259713 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.264833 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.265010 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.266640 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.267938 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.274577 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttx8h\" (UniqueName: \"kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h\") pod \"cinder-scheduler-0\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.276478 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b446q\" (UniqueName: \"kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q\") pod \"dnsmasq-dns-586c7c99fc-8l5l6\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " 
pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359552 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359602 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359644 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhlfj\" (UniqueName: \"kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359685 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359729 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359776 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359835 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.359912 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.360386 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.363520 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.363553 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.363804 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.364845 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.373532 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.392013 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhlfj\" (UniqueName: \"kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj\") pod \"cinder-api-0\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.422232 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.524378 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.843575 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:32:52 crc kubenswrapper[4769]: W1006 07:32:52.847537 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5741afc_7573_4764_b296_d899141351cc.slice/crio-342cb6710267bb266c543eed60a3c13b07ac2c77b9c7c43a2a464b7ccb6d7766 WatchSource:0}: Error finding container 342cb6710267bb266c543eed60a3c13b07ac2c77b9c7c43a2a464b7ccb6d7766: Status 404 returned error can't find the container with id 342cb6710267bb266c543eed60a3c13b07ac2c77b9c7c43a2a464b7ccb6d7766 Oct 06 07:32:52 crc kubenswrapper[4769]: I1006 07:32:52.957298 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:32:52 crc kubenswrapper[4769]: W1006 07:32:52.958793 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f55324f_ebdf_459f_aa92_abfeec6a6755.slice/crio-d5cc7554548757e16a722243afb4b1528d93fed2f13a1f8c38bfac4b1d157dc2 WatchSource:0}: Error finding container d5cc7554548757e16a722243afb4b1528d93fed2f13a1f8c38bfac4b1d157dc2: Status 404 returned error can't find the container with id d5cc7554548757e16a722243afb4b1528d93fed2f13a1f8c38bfac4b1d157dc2 Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.032461 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.782652 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerStarted","Data":"d318a17857edbdc89e7cf1995c0d7de7a05130d3e68312fe8dbb55d3c612523d"} Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.784175 4769 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerStarted","Data":"342cb6710267bb266c543eed60a3c13b07ac2c77b9c7c43a2a464b7ccb6d7766"} Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.785759 4769 generic.go:334] "Generic (PLEG): container finished" podID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerID="c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda" exitCode=0 Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.785785 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" event={"ID":"5f55324f-ebdf-459f-aa92-abfeec6a6755","Type":"ContainerDied","Data":"c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda"} Oct 06 07:32:53 crc kubenswrapper[4769]: I1006 07:32:53.785799 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" event={"ID":"5f55324f-ebdf-459f-aa92-abfeec6a6755","Type":"ContainerStarted","Data":"d5cc7554548757e16a722243afb4b1528d93fed2f13a1f8c38bfac4b1d157dc2"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.217944 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.793921 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerStarted","Data":"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.794221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerStarted","Data":"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.794335 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-api-0" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api-log" containerID="cri-o://7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a" gracePeriod=30 Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.794584 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.794796 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api" containerID="cri-o://0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765" gracePeriod=30 Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.797162 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerStarted","Data":"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.797202 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerStarted","Data":"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.807120 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" event={"ID":"5f55324f-ebdf-459f-aa92-abfeec6a6755","Type":"ContainerStarted","Data":"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a"} Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.807445 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.832047 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" 
podStartSLOduration=2.832032118 podStartE2EDuration="2.832032118s" podCreationTimestamp="2025-10-06 07:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:54.821576923 +0000 UTC m=+971.345858070" watchObservedRunningTime="2025-10-06 07:32:54.832032118 +0000 UTC m=+971.356313265" Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.842839 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.531919838 podStartE2EDuration="2.842821873s" podCreationTimestamp="2025-10-06 07:32:52 +0000 UTC" firstStartedPulling="2025-10-06 07:32:52.851694172 +0000 UTC m=+969.375975319" lastFinishedPulling="2025-10-06 07:32:53.162596207 +0000 UTC m=+969.686877354" observedRunningTime="2025-10-06 07:32:54.840081628 +0000 UTC m=+971.364362775" watchObservedRunningTime="2025-10-06 07:32:54.842821873 +0000 UTC m=+971.367103020" Oct 06 07:32:54 crc kubenswrapper[4769]: I1006 07:32:54.862154 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" podStartSLOduration=2.862130891 podStartE2EDuration="2.862130891s" podCreationTimestamp="2025-10-06 07:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:32:54.857900336 +0000 UTC m=+971.382181483" watchObservedRunningTime="2025-10-06 07:32:54.862130891 +0000 UTC m=+971.386412038" Oct 06 07:32:55 crc kubenswrapper[4769]: I1006 07:32:55.817094 4769 generic.go:334] "Generic (PLEG): container finished" podID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerID="7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a" exitCode=143 Oct 06 07:32:55 crc kubenswrapper[4769]: I1006 07:32:55.817201 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerDied","Data":"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a"} Oct 06 07:32:57 crc kubenswrapper[4769]: I1006 07:32:57.374257 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 07:32:58 crc kubenswrapper[4769]: I1006 07:32:58.731832 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:32:58 crc kubenswrapper[4769]: I1006 07:32:58.732224 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b886c44b-zxp8t" Oct 06 07:33:00 crc kubenswrapper[4769]: I1006 07:33:00.034386 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-655c85cd69-rkn7x" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.620740 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.622371 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.625053 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.625417 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-mvgv8" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.626860 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.632860 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.675039 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.675168 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxjlz\" (UniqueName: \"kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.675411 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.675522 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.777397 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.777527 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.777697 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.777733 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxjlz\" (UniqueName: \"kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.778960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.786813 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.790818 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.799625 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxjlz\" (UniqueName: \"kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz\") pod \"openstackclient\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.908400 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.909350 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.920735 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.993218 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:01 crc kubenswrapper[4769]: I1006 07:33:01.994865 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.008875 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.090693 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: E1006 07:33:02.090924 4769 log.go:32] "RunPodSandbox from runtime service failed" err=< Oct 06 07:33:02 crc kubenswrapper[4769]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_a2831720-9d97-4e47-822e-4971e401591c_0(94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027" Netns:"/var/run/netns/c3da7125-53f1-42ff-9ee9-a5f223c73c68" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027;K8S_POD_UID=a2831720-9d97-4e47-822e-4971e401591c" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a2831720-9d97-4e47-822e-4971e401591c]: expected pod UID "a2831720-9d97-4e47-822e-4971e401591c" but got "6f346861-ee62-493c-82bd-2ea7fa7347e6" from Kube API Oct 06 07:33:02 crc kubenswrapper[4769]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 06 07:33:02 crc kubenswrapper[4769]: > Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.090958 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqmv\" (UniqueName: \"kubernetes.io/projected/6f346861-ee62-493c-82bd-2ea7fa7347e6-kube-api-access-bxqmv\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.091003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: E1006 07:33:02.091007 4769 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Oct 06 07:33:02 crc kubenswrapper[4769]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_a2831720-9d97-4e47-822e-4971e401591c_0(94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027" Netns:"/var/run/netns/c3da7125-53f1-42ff-9ee9-a5f223c73c68" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=94a3b8357485468cc9efe0886b30a8d2f5afb21fb6da62413d3dc30ba80a5027;K8S_POD_UID=a2831720-9d97-4e47-822e-4971e401591c" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/a2831720-9d97-4e47-822e-4971e401591c]: expected pod UID "a2831720-9d97-4e47-822e-4971e401591c" but got "6f346861-ee62-493c-82bd-2ea7fa7347e6" from Kube API Oct 06 07:33:02 crc kubenswrapper[4769]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Oct 06 07:33:02 crc kubenswrapper[4769]: > pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.091029 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.192923 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.192987 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxqmv\" (UniqueName: \"kubernetes.io/projected/6f346861-ee62-493c-82bd-2ea7fa7347e6-kube-api-access-bxqmv\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.193027 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.193054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.194554 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.198690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.201145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6f346861-ee62-493c-82bd-2ea7fa7347e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.212767 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxqmv\" (UniqueName: \"kubernetes.io/projected/6f346861-ee62-493c-82bd-2ea7fa7347e6-kube-api-access-bxqmv\") pod \"openstackclient\" (UID: \"6f346861-ee62-493c-82bd-2ea7fa7347e6\") " pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.375939 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.423605 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.534398 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.534825 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="dnsmasq-dns" containerID="cri-o://f92720d79c86215430b3f74500bb16184a254ebbcd84250107d554e751380ea0" gracePeriod=10 Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.668372 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.741626 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.901888 4769 generic.go:334] "Generic (PLEG): container finished" podID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerID="f92720d79c86215430b3f74500bb16184a254ebbcd84250107d554e751380ea0" exitCode=0 Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.901967 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.902589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" event={"ID":"29620dea-acc5-4992-8a71-c4254e7b8a22","Type":"ContainerDied","Data":"f92720d79c86215430b3f74500bb16184a254ebbcd84250107d554e751380ea0"} Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.902781 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="cinder-scheduler" containerID="cri-o://7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927" gracePeriod=30 Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.903236 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="probe" containerID="cri-o://8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393" gracePeriod=30 Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.915224 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:02 crc kubenswrapper[4769]: I1006 07:33:02.918148 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a2831720-9d97-4e47-822e-4971e401591c" podUID="6f346861-ee62-493c-82bd-2ea7fa7347e6" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.005403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxjlz\" (UniqueName: \"kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz\") pod \"a2831720-9d97-4e47-822e-4971e401591c\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.005546 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config\") pod \"a2831720-9d97-4e47-822e-4971e401591c\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.005595 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle\") pod \"a2831720-9d97-4e47-822e-4971e401591c\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.005783 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret\") pod \"a2831720-9d97-4e47-822e-4971e401591c\" (UID: \"a2831720-9d97-4e47-822e-4971e401591c\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.007298 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a2831720-9d97-4e47-822e-4971e401591c" (UID: "a2831720-9d97-4e47-822e-4971e401591c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.011810 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a2831720-9d97-4e47-822e-4971e401591c" (UID: "a2831720-9d97-4e47-822e-4971e401591c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.014535 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2831720-9d97-4e47-822e-4971e401591c" (UID: "a2831720-9d97-4e47-822e-4971e401591c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.015879 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz" (OuterVolumeSpecName: "kube-api-access-dxjlz") pod "a2831720-9d97-4e47-822e-4971e401591c" (UID: "a2831720-9d97-4e47-822e-4971e401591c"). InnerVolumeSpecName "kube-api-access-dxjlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.057383 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.108109 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxjlz\" (UniqueName: \"kubernetes.io/projected/a2831720-9d97-4e47-822e-4971e401591c-kube-api-access-dxjlz\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.108148 4769 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2831720-9d97-4e47-822e-4971e401591c-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.108161 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.108174 4769 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2831720-9d97-4e47-822e-4971e401591c-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.160583 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.209625 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hsb4\" (UniqueName: \"kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.209736 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.209798 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.209889 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.209959 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.210192 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config\") pod \"29620dea-acc5-4992-8a71-c4254e7b8a22\" (UID: \"29620dea-acc5-4992-8a71-c4254e7b8a22\") " Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.248165 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4" (OuterVolumeSpecName: "kube-api-access-8hsb4") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "kube-api-access-8hsb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.268708 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config" (OuterVolumeSpecName: "config") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.288249 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.297151 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.298049 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.312096 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.312135 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.312147 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.312156 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hsb4\" (UniqueName: \"kubernetes.io/projected/29620dea-acc5-4992-8a71-c4254e7b8a22-kube-api-access-8hsb4\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.312166 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.329962 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "29620dea-acc5-4992-8a71-c4254e7b8a22" (UID: "29620dea-acc5-4992-8a71-c4254e7b8a22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.413911 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/29620dea-acc5-4992-8a71-c4254e7b8a22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.922326 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" event={"ID":"29620dea-acc5-4992-8a71-c4254e7b8a22","Type":"ContainerDied","Data":"b42de48ff95e1296efaa0a842bd0609a5703218d10cccb5112047b6b68fbac0e"} Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.922719 4769 scope.go:117] "RemoveContainer" containerID="f92720d79c86215430b3f74500bb16184a254ebbcd84250107d554e751380ea0" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.922892 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf986bc8f-wzfq2" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.927604 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6f346861-ee62-493c-82bd-2ea7fa7347e6","Type":"ContainerStarted","Data":"d0633731406ef2cafe149d29cb0fc3c4fc3f475edb3a90590b32ec593713b5a4"} Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.932976 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5741afc-7573-4764-b296-d899141351cc" containerID="8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393" exitCode=0 Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.933008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerDied","Data":"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393"} Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.933057 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.951292 4769 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="a2831720-9d97-4e47-822e-4971e401591c" podUID="6f346861-ee62-493c-82bd-2ea7fa7347e6" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.957124 4769 scope.go:117] "RemoveContainer" containerID="fdc99787210a6ba7f502855de74fc34d1f37fe12e0e9521b573da4dbc042876f" Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.957966 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:33:03 crc kubenswrapper[4769]: I1006 07:33:03.967275 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cf986bc8f-wzfq2"] Oct 06 07:33:04 crc kubenswrapper[4769]: I1006 07:33:04.180109 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" path="/var/lib/kubelet/pods/29620dea-acc5-4992-8a71-c4254e7b8a22/volumes" Oct 06 07:33:04 crc kubenswrapper[4769]: I1006 07:33:04.180725 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2831720-9d97-4e47-822e-4971e401591c" path="/var/lib/kubelet/pods/a2831720-9d97-4e47-822e-4971e401591c/volumes" Oct 06 07:33:04 crc kubenswrapper[4769]: I1006 07:33:04.210124 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:33:04 crc kubenswrapper[4769]: I1006 07:33:04.900595 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.441486 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591188 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591479 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttx8h\" (UniqueName: \"kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591552 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591600 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591618 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.591674 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle\") pod \"e5741afc-7573-4764-b296-d899141351cc\" (UID: \"e5741afc-7573-4764-b296-d899141351cc\") " Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.593723 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.597037 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h" (OuterVolumeSpecName: "kube-api-access-ttx8h") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "kube-api-access-ttx8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.597717 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.598674 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts" (OuterVolumeSpecName: "scripts") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.651036 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.694131 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.694168 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttx8h\" (UniqueName: \"kubernetes.io/projected/e5741afc-7573-4764-b296-d899141351cc-kube-api-access-ttx8h\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.694179 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.694187 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e5741afc-7573-4764-b296-d899141351cc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.694195 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.709670 4769 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-proxy-7b475fd569-c56qk"] Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.709833 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data" (OuterVolumeSpecName: "config-data") pod "e5741afc-7573-4764-b296-d899141351cc" (UID: "e5741afc-7573-4764-b296-d899141351cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:05 crc kubenswrapper[4769]: E1006 07:33:05.710066 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="cinder-scheduler" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710082 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="cinder-scheduler" Oct 06 07:33:05 crc kubenswrapper[4769]: E1006 07:33:05.710096 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="init" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710102 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="init" Oct 06 07:33:05 crc kubenswrapper[4769]: E1006 07:33:05.710137 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="dnsmasq-dns" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710143 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="dnsmasq-dns" Oct 06 07:33:05 crc kubenswrapper[4769]: E1006 07:33:05.710154 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="probe" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710161 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5741afc-7573-4764-b296-d899141351cc" 
containerName="probe" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710313 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="probe" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710324 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5741afc-7573-4764-b296-d899141351cc" containerName="cinder-scheduler" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.710336 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="29620dea-acc5-4992-8a71-c4254e7b8a22" containerName="dnsmasq-dns" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.711220 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.715121 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.715256 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.715472 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.734540 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b475fd569-c56qk"] Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.795581 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r522\" (UniqueName: \"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-kube-api-access-5r522\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.795620 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-internal-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796019 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-config-data\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796093 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-public-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796116 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-run-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796193 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-combined-ca-bundle\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 
07:33:05.796215 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-etc-swift\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796271 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-log-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.796360 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5741afc-7573-4764-b296-d899141351cc-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-log-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898197 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r522\" (UniqueName: \"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-kube-api-access-5r522\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898225 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-config-data\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-internal-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898279 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-public-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-run-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898365 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-combined-ca-bundle\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.898386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-etc-swift\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.899059 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-log-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.900094 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef915e60-c2fd-4336-84f3-62b2cfa713a9-run-httpd\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.902572 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-etc-swift\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.906787 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-config-data\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.908469 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-combined-ca-bundle\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: 
\"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.908882 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-internal-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.909753 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef915e60-c2fd-4336-84f3-62b2cfa713a9-public-tls-certs\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.914799 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r522\" (UniqueName: \"kubernetes.io/projected/ef915e60-c2fd-4336-84f3-62b2cfa713a9-kube-api-access-5r522\") pod \"swift-proxy-7b475fd569-c56qk\" (UID: \"ef915e60-c2fd-4336-84f3-62b2cfa713a9\") " pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.968815 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5741afc-7573-4764-b296-d899141351cc" containerID="7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927" exitCode=0 Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.968862 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerDied","Data":"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927"} Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.968901 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.968923 4769 scope.go:117] "RemoveContainer" containerID="8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393" Oct 06 07:33:05 crc kubenswrapper[4769]: I1006 07:33:05.968910 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e5741afc-7573-4764-b296-d899141351cc","Type":"ContainerDied","Data":"342cb6710267bb266c543eed60a3c13b07ac2c77b9c7c43a2a464b7ccb6d7766"} Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.007936 4769 scope.go:117] "RemoveContainer" containerID="7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.027806 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.030629 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.037460 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.046943 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.049033 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.051366 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.076609 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.110519 4769 scope.go:117] "RemoveContainer" containerID="8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393" Oct 06 07:33:06 crc kubenswrapper[4769]: E1006 07:33:06.113061 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393\": container with ID starting with 8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393 not found: ID does not exist" containerID="8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.113121 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393"} err="failed to get container status \"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393\": rpc error: code = NotFound desc = could not find container \"8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393\": container with ID starting with 8ee8714f8034fe3a8182925cc4c2baaf266bbd142c6598c5c3f6c442186c9393 not found: ID does not exist" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.113157 4769 scope.go:117] "RemoveContainer" containerID="7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927" Oct 06 07:33:06 crc kubenswrapper[4769]: E1006 07:33:06.113640 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927\": container with ID starting with 7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927 not found: ID does not exist" containerID="7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.113677 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927"} err="failed to get container status \"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927\": rpc error: code = NotFound desc = could not find container \"7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927\": container with ID starting with 7e046fb8bf214383d774fe3563a645be49b93c944f6ddc7c209a3f4380535927 not found: ID does not exist" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.181811 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5741afc-7573-4764-b296-d899141351cc" path="/var/lib/kubelet/pods/e5741afc-7573-4764-b296-d899141351cc/volumes" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206052 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206116 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206141 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206216 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vcs\" (UniqueName: \"kubernetes.io/projected/602f4ddb-1ad0-440b-b402-0179dd5604b3-kube-api-access-m8vcs\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206242 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/602f4ddb-1ad0-440b-b402-0179dd5604b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.206270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307591 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307762 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vcs\" (UniqueName: \"kubernetes.io/projected/602f4ddb-1ad0-440b-b402-0179dd5604b3-kube-api-access-m8vcs\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307807 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/602f4ddb-1ad0-440b-b402-0179dd5604b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.307846 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.308964 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/602f4ddb-1ad0-440b-b402-0179dd5604b3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.314899 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-scripts\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.318898 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.320708 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.332975 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f4ddb-1ad0-440b-b402-0179dd5604b3-config-data\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.335376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vcs\" (UniqueName: \"kubernetes.io/projected/602f4ddb-1ad0-440b-b402-0179dd5604b3-kube-api-access-m8vcs\") pod \"cinder-scheduler-0\" (UID: \"602f4ddb-1ad0-440b-b402-0179dd5604b3\") " pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.389718 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.415396 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.415670 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-central-agent" containerID="cri-o://d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.416342 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="proxy-httpd" containerID="cri-o://45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.416405 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="sg-core" containerID="cri-o://52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.416452 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-notification-agent" containerID="cri-o://5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.426052 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.629627 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7b475fd569-c56qk"] Oct 06 07:33:06 crc kubenswrapper[4769]: W1006 07:33:06.633013 
4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef915e60_c2fd_4336_84f3_62b2cfa713a9.slice/crio-a198efba4d693d5f21ef175ac76fdbee16489c121255f98978b5fb1a27c6254b WatchSource:0}: Error finding container a198efba4d693d5f21ef175ac76fdbee16489c121255f98978b5fb1a27c6254b: Status 404 returned error can't find the container with id a198efba4d693d5f21ef175ac76fdbee16489c121255f98978b5fb1a27c6254b Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.825077 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b48758c79-6xdwf" Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.872312 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.877998 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dd985fb44-8mpq5" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-api" containerID="cri-o://e0568f4d060c057d1c843efbe83d0b97496a11e921085f3b85dde6fe6f7c7075" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.878087 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5dd985fb44-8mpq5" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-httpd" containerID="cri-o://8324ad13c1a3c4c51ce4233012ba5c7a5269d519deacff8bea8681abf29fb22e" gracePeriod=30 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.892682 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.995746 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerDied","Data":"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"} Oct 06 07:33:06 crc 
kubenswrapper[4769]: I1006 07:33:06.995664 4769 generic.go:334] "Generic (PLEG): container finished" podID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerID="45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e" exitCode=0 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.995851 4769 generic.go:334] "Generic (PLEG): container finished" podID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerID="52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0" exitCode=2 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.995858 4769 generic.go:334] "Generic (PLEG): container finished" podID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerID="d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf" exitCode=0 Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.995946 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerDied","Data":"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"} Oct 06 07:33:06 crc kubenswrapper[4769]: I1006 07:33:06.995983 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerDied","Data":"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"} Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.017891 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b475fd569-c56qk" event={"ID":"ef915e60-c2fd-4336-84f3-62b2cfa713a9","Type":"ContainerStarted","Data":"0f1ad1dc2d45355c5f776f05846519a9172a2687a2bfc426615aaddff72598d7"} Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.017991 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b475fd569-c56qk" event={"ID":"ef915e60-c2fd-4336-84f3-62b2cfa713a9","Type":"ContainerStarted","Data":"a198efba4d693d5f21ef175ac76fdbee16489c121255f98978b5fb1a27c6254b"} Oct 06 07:33:07 crc 
kubenswrapper[4769]: I1006 07:33:07.024007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"602f4ddb-1ad0-440b-b402-0179dd5604b3","Type":"ContainerStarted","Data":"d7be7ea41c0bcc2fe34ce1de196e2ed7e06f3a9cdd30526821d54028e4c785d8"}
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.617349 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.741917 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742235 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742299 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742341 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742372 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742459 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.742497 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7fwp\" (UniqueName: \"kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp\") pod \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\" (UID: \"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1\") "
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.743186 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.743747 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.747916 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts" (OuterVolumeSpecName: "scripts") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.749678 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp" (OuterVolumeSpecName: "kube-api-access-s7fwp") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "kube-api-access-s7fwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.771841 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.772097 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-log" containerID="cri-o://8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a" gracePeriod=30
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.772238 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-httpd" containerID="cri-o://012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1" gracePeriod=30
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.791662 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.830094 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850890 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850928 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850946 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850956 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850968 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7fwp\" (UniqueName:
\"kubernetes.io/projected/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-kube-api-access-s7fwp\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.850979 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.864625 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data" (OuterVolumeSpecName: "config-data") pod "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" (UID: "b93fa41b-4fbc-42f8-9ba6-68f6b56540f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:33:07 crc kubenswrapper[4769]: I1006 07:33:07.952625 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.137358 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"602f4ddb-1ad0-440b-b402-0179dd5604b3","Type":"ContainerStarted","Data":"01ab60fba2a5aa90187f3f7d6c3e122155f100da516130563d7b1d46fdb433a8"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.175333 4769 generic.go:334] "Generic (PLEG): container finished" podID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerID="8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a" exitCode=143
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.204299 4769 generic.go:334] "Generic (PLEG): container finished" podID="7faa379c-e24f-4820-87d8-4e94e641f298" containerID="8324ad13c1a3c4c51ce4233012ba5c7a5269d519deacff8bea8681abf29fb22e" exitCode=0
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.211216 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerDied","Data":"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.211265 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerDied","Data":"8324ad13c1a3c4c51ce4233012ba5c7a5269d519deacff8bea8681abf29fb22e"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.249023 4769 generic.go:334] "Generic (PLEG): container finished" podID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerID="5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033" exitCode=0
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.249113 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerDied","Data":"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.249144 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b93fa41b-4fbc-42f8-9ba6-68f6b56540f1","Type":"ContainerDied","Data":"c7fe8fefd3b04cd976d79e7cbe3ed2fba69f0cad92886e67472d9f0a4d419ba2"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.249164 4769 scope.go:117] "RemoveContainer" containerID="45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.249190 4769 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.259507 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7b475fd569-c56qk" event={"ID":"ef915e60-c2fd-4336-84f3-62b2cfa713a9","Type":"ContainerStarted","Data":"e5d71cdcada6d9a0c179617dcfb674c16bf15973720f6bfe1991549c073ae041"}
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.260070 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b475fd569-c56qk"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.260261 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7b475fd569-c56qk"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.270330 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.284706 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.297375 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.297707 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="sg-core"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.297719 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="sg-core"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.297735 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-central-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.297743 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-central-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.297767 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-notification-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.297773 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-notification-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.297789 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="proxy-httpd"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.300138 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="proxy-httpd"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.300341 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="sg-core"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.300350 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="proxy-httpd"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.300360 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-central-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.300378 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" containerName="ceilometer-notification-agent"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.302178 4769 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.305124 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.306163 4769 scope.go:117] "RemoveContainer" containerID="52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.307445 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7b475fd569-c56qk" podStartSLOduration=3.307435635 podStartE2EDuration="3.307435635s" podCreationTimestamp="2025-10-06 07:33:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:33:08.30505383 +0000 UTC m=+984.829334977" watchObservedRunningTime="2025-10-06 07:33:08.307435635 +0000 UTC m=+984.831716772"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.308237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.347259 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.388772 4769 scope.go:117] "RemoveContainer" containerID="5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.425530 4769 scope.go:117] "RemoveContainer" containerID="d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470368 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470459 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470493 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt2j5\" (UniqueName: \"kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470523 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470574 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470650 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.470674 4769 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.479695 4769 scope.go:117] "RemoveContainer" containerID="45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.480414 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e\": container with ID starting with 45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e not found: ID does not exist" containerID="45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.480508 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e"} err="failed to get container status \"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e\": rpc error: code = NotFound desc = could not find container \"45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e\": container with ID starting with 45006d0d5a1c8325ccf47d1fb3c1f19c1d80d895347c599f9bd1b6475121fb2e not found: ID does not exist"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.480544 4769 scope.go:117] "RemoveContainer" containerID="52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.480820 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0\": container with ID starting with 52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0 not found: ID does not exist" containerID="52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.480841 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0"} err="failed to get container status \"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0\": rpc error: code = NotFound desc = could not find container \"52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0\": container with ID starting with 52c20d1b21a3f78a03d6f8335170ca7b1a35dafd919a10b88b75b7b2442163b0 not found: ID does not exist"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.480853 4769 scope.go:117] "RemoveContainer" containerID="5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.481302 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033\": container with ID starting with 5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033 not found: ID does not exist" containerID="5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.481326 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033"} err="failed to get container status \"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033\": rpc error: code = NotFound desc = could not find container \"5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033\": container with ID starting with 5716573e3308dcf71eab6fb897b271cd0145ad266cd047b7d87fff774d59f033 not found: ID does not
exist"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.481344 4769 scope.go:117] "RemoveContainer" containerID="d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"
Oct 06 07:33:08 crc kubenswrapper[4769]: E1006 07:33:08.481659 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf\": container with ID starting with d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf not found: ID does not exist" containerID="d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.481682 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf"} err="failed to get container status \"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf\": rpc error: code = NotFound desc = could not find container \"d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf\": container with ID starting with d5b42ab10096df59d13e2df1c5a7662f4abd88facd2c90d14223ea3d412c14cf not found: ID does not exist"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572157 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt2j5\" (UniqueName: \"kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572235 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572287 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572339 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.572437 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.573114 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd\") pod \"ceilometer-0\"
(UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.573169 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.580271 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.581836 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.584050 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.589987 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.595920 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt2j5\" (UniqueName: \"kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5\") pod \"ceilometer-0\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " pod="openstack/ceilometer-0"
Oct 06 07:33:08 crc kubenswrapper[4769]: I1006 07:33:08.658830 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.148303 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.148886 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-log" containerID="cri-o://2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f" gracePeriod=30
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.149248 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-httpd" containerID="cri-o://da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce" gracePeriod=30
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.168538 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:09 crc kubenswrapper[4769]: W1006 07:33:09.188581 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c3a423_3e74_4f17_95e1_b8d0af99d50b.slice/crio-be5c195e3939f52d3e35086146633a4f8b3c56222071388d6db2f93cd3475572 WatchSource:0}: Error finding container be5c195e3939f52d3e35086146633a4f8b3c56222071388d6db2f93cd3475572: Status 404 returned error can't find the container with id be5c195e3939f52d3e35086146633a4f8b3c56222071388d6db2f93cd3475572
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.275685 4769
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"602f4ddb-1ad0-440b-b402-0179dd5604b3","Type":"ContainerStarted","Data":"0d82bd180ed7f723d37329b7c2dbf9fd589a29ceac1ffa973759443d77d019a7"}
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.276570 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerStarted","Data":"be5c195e3939f52d3e35086146633a4f8b3c56222071388d6db2f93cd3475572"}
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.301893 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.30187798 podStartE2EDuration="3.30187798s" podCreationTimestamp="2025-10-06 07:33:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:33:09.297009227 +0000 UTC m=+985.821290384" watchObservedRunningTime="2025-10-06 07:33:09.30187798 +0000 UTC m=+985.826159127"
Oct 06 07:33:09 crc kubenswrapper[4769]: I1006 07:33:09.485640 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.042297 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112073 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zrkw\" (UniqueName: \"kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") "
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112167 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") "
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112223 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") "
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112313 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") "
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112348 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") "
Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\"
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112526 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112585 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.112981 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs" (OuterVolumeSpecName: "logs") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.113159 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.113274 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.124585 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts" (OuterVolumeSpecName: "scripts") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.128666 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw" (OuterVolumeSpecName: "kube-api-access-5zrkw") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "kube-api-access-5zrkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.134057 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.202286 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.205868 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93fa41b-4fbc-42f8-9ba6-68f6b56540f1" path="/var/lib/kubelet/pods/b93fa41b-4fbc-42f8-9ba6-68f6b56540f1/volumes" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.213717 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.213800 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") pod \"fe89aad5-df76-421d-a674-1dc37939f1f4\" (UID: \"fe89aad5-df76-421d-a674-1dc37939f1f4\") " Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.214455 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zrkw\" (UniqueName: \"kubernetes.io/projected/fe89aad5-df76-421d-a674-1dc37939f1f4-kube-api-access-5zrkw\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.214472 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.214500 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.214513 4769 
reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.214527 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe89aad5-df76-421d-a674-1dc37939f1f4-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: W1006 07:33:10.215282 4769 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fe89aad5-df76-421d-a674-1dc37939f1f4/volumes/kubernetes.io~secret/combined-ca-bundle Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.215304 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.246768 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.283575 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data" (OuterVolumeSpecName: "config-data") pod "fe89aad5-df76-421d-a674-1dc37939f1f4" (UID: "fe89aad5-df76-421d-a674-1dc37939f1f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.306411 4769 generic.go:334] "Generic (PLEG): container finished" podID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerID="2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f" exitCode=143 Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.317437 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.317465 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe89aad5-df76-421d-a674-1dc37939f1f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.317494 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.329180 4769 generic.go:334] "Generic (PLEG): container finished" podID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerID="012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1" exitCode=0 Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.329303 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348285 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerDied","Data":"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f"} Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348650 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerStarted","Data":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348666 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerStarted","Data":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348675 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerDied","Data":"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1"} Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348693 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe89aad5-df76-421d-a674-1dc37939f1f4","Type":"ContainerDied","Data":"510c99935126fb1b9a5878f3813e1431ba346d0b8a347793dda61fb2f5ffed2a"} Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.348709 4769 scope.go:117] "RemoveContainer" containerID="012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.386728 4769 scope.go:117] "RemoveContainer" containerID="8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a" Oct 06 07:33:10 crc 
kubenswrapper[4769]: I1006 07:33:10.391812 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.441546 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.460201 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:33:10 crc kubenswrapper[4769]: E1006 07:33:10.460610 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-httpd" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.460624 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-httpd" Oct 06 07:33:10 crc kubenswrapper[4769]: E1006 07:33:10.460645 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-log" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.460651 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-log" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.460835 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-log" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.460869 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" containerName="glance-httpd" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.462178 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.470358 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.486611 4769 scope.go:117] "RemoveContainer" containerID="012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.487078 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.487247 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 06 07:33:10 crc kubenswrapper[4769]: E1006 07:33:10.491547 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1\": container with ID starting with 012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1 not found: ID does not exist" containerID="012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.491604 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1"} err="failed to get container status \"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1\": rpc error: code = NotFound desc = could not find container \"012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1\": container with ID starting with 012988795d448b0e8423989f373a30d384d0bd0494dd2c34945d6c5c04a05cb1 not found: ID does not exist" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.491625 4769 scope.go:117] "RemoveContainer" 
containerID="8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a" Oct 06 07:33:10 crc kubenswrapper[4769]: E1006 07:33:10.493537 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a\": container with ID starting with 8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a not found: ID does not exist" containerID="8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.493562 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a"} err="failed to get container status \"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a\": rpc error: code = NotFound desc = could not find container \"8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a\": container with ID starting with 8754e21580fe9a6e6e29cab9687598664822a027e5dca5e6861ac0d8eb9e0d8a not found: ID does not exist" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.525897 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8rtm\" (UniqueName: \"kubernetes.io/projected/5f9369da-2ad7-4cd1-a161-41ece66008e0-kube-api-access-z8rtm\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.525977 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: 
I1006 07:33:10.525997 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-logs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.526033 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.526058 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.526096 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-config-data\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.526127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc 
kubenswrapper[4769]: I1006 07:33:10.526165 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.627960 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628003 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-logs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628037 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628057 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628089 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-config-data\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628855 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628897 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.628991 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8rtm\" (UniqueName: \"kubernetes.io/projected/5f9369da-2ad7-4cd1-a161-41ece66008e0-kube-api-access-z8rtm\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.629316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-logs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.629469 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.630316 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5f9369da-2ad7-4cd1-a161-41ece66008e0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.634780 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.638862 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-scripts\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.639312 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.650117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9369da-2ad7-4cd1-a161-41ece66008e0-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.658195 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.661304 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8rtm\" (UniqueName: \"kubernetes.io/projected/5f9369da-2ad7-4cd1-a161-41ece66008e0-kube-api-access-z8rtm\") pod \"glance-default-external-api-0\" (UID: \"5f9369da-2ad7-4cd1-a161-41ece66008e0\") " pod="openstack/glance-default-external-api-0" Oct 06 07:33:10 crc kubenswrapper[4769]: I1006 07:33:10.865780 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.045226 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.080018 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.139015 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.139582 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2s86\" (UniqueName: \"kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.139532 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.139657 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.139683 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.140553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.140581 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.140671 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.140730 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts\") pod \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\" (UID: \"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d\") " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.141162 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.142483 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs" (OuterVolumeSpecName: "logs") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.167616 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86" (OuterVolumeSpecName: "kube-api-access-d2s86") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "kube-api-access-d2s86". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.169815 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.173784 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts" (OuterVolumeSpecName: "scripts") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.215556 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.250167 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2s86\" (UniqueName: \"kubernetes.io/projected/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-kube-api-access-d2s86\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.250194 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.250212 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.250221 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc 
kubenswrapper[4769]: I1006 07:33:11.250231 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.296696 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.301748 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.315197 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data" (OuterVolumeSpecName: "config-data") pod "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" (UID: "ea2bd31c-b5f9-41c5-ae05-ed8837acc76d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.351465 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.351496 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.351508 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.364403 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerStarted","Data":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.382661 4769 generic.go:334] "Generic (PLEG): container finished" podID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerID="da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce" exitCode=0 Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.382708 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerDied","Data":"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce"} Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.382733 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ea2bd31c-b5f9-41c5-ae05-ed8837acc76d","Type":"ContainerDied","Data":"5999fa526c7bc55cb7034d831f1906f122d91504a3dc34201aa0d0873a933ff2"} Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.382754 4769 scope.go:117] "RemoveContainer" containerID="da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.382886 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.392557 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.435529 4769 scope.go:117] "RemoveContainer" containerID="2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.460133 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.476788 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.490753 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:33:11 crc kubenswrapper[4769]: E1006 07:33:11.491280 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-httpd" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.491304 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-httpd" Oct 06 07:33:11 crc kubenswrapper[4769]: E1006 07:33:11.491332 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-log" Oct 06 07:33:11 crc kubenswrapper[4769]: 
I1006 07:33:11.491341 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-log" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.496907 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-httpd" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.496944 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" containerName="glance-log" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.497843 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.497932 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.502593 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.502883 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.509527 4769 scope.go:117] "RemoveContainer" containerID="da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce" Oct 06 07:33:11 crc kubenswrapper[4769]: E1006 07:33:11.509822 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce\": container with ID starting with da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce not found: ID does not exist" containerID="da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.509847 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce"} err="failed to get container status \"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce\": rpc error: code = NotFound desc = could not find container \"da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce\": container with ID starting with da944ae8b48a8feb85e65f71acb697dace3e0fa1ac85e9da3d21405101f467ce not found: ID does not exist" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.509867 4769 scope.go:117] "RemoveContainer" containerID="2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f" Oct 06 07:33:11 crc kubenswrapper[4769]: E1006 07:33:11.510031 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f\": container with ID starting with 2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f not found: ID does not exist" containerID="2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.510047 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f"} err="failed to get container status \"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f\": rpc error: code = NotFound desc = could not find container \"2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f\": container with ID starting with 2567eb0e3b80208e0339d802fa45e31f1f189874e91541f34882a44b0d34975f not found: ID does not exist" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.514365 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557144 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557191 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557258 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557324 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557345 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttrmx\" (UniqueName: \"kubernetes.io/projected/3f6baf3d-2817-453a-b212-4b8860056e9f-kube-api-access-ttrmx\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557376 
4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557398 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.557484 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.660899 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.660980 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661011 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ttrmx\" (UniqueName: \"kubernetes.io/projected/3f6baf3d-2817-453a-b212-4b8860056e9f-kube-api-access-ttrmx\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661039 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661059 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661091 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661118 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661134 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.661964 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.663368 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-logs\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.663837 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f6baf3d-2817-453a-b212-4b8860056e9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.666948 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.667018 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.667304 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.667533 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f6baf3d-2817-453a-b212-4b8860056e9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.678204 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttrmx\" (UniqueName: \"kubernetes.io/projected/3f6baf3d-2817-453a-b212-4b8860056e9f-kube-api-access-ttrmx\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.695218 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"3f6baf3d-2817-453a-b212-4b8860056e9f\") " pod="openstack/glance-default-internal-api-0" Oct 06 07:33:11 crc kubenswrapper[4769]: I1006 07:33:11.819878 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.179019 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea2bd31c-b5f9-41c5-ae05-ed8837acc76d" path="/var/lib/kubelet/pods/ea2bd31c-b5f9-41c5-ae05-ed8837acc76d/volumes" Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.179987 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe89aad5-df76-421d-a674-1dc37939f1f4" path="/var/lib/kubelet/pods/fe89aad5-df76-421d-a674-1dc37939f1f4/volumes" Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.398609 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f9369da-2ad7-4cd1-a161-41ece66008e0","Type":"ContainerStarted","Data":"6a58c7c9e036287a5e2266b537a4afad1bb102478b9e1716d187e7da9cd2bed5"} Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.398972 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f9369da-2ad7-4cd1-a161-41ece66008e0","Type":"ContainerStarted","Data":"405f4ab3b98dc1673e69c75d17278789a2f5489c0991a29abbdbd5247db124fd"} Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.411742 4769 generic.go:334] "Generic (PLEG): container finished" podID="7faa379c-e24f-4820-87d8-4e94e641f298" containerID="e0568f4d060c057d1c843efbe83d0b97496a11e921085f3b85dde6fe6f7c7075" exitCode=0 Oct 06 07:33:12 crc kubenswrapper[4769]: I1006 07:33:12.411791 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerDied","Data":"e0568f4d060c057d1c843efbe83d0b97496a11e921085f3b85dde6fe6f7c7075"} Oct 06 07:33:16 crc kubenswrapper[4769]: I1006 07:33:16.053296 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7b475fd569-c56qk" Oct 06 07:33:16 crc kubenswrapper[4769]: I1006 
07:33:16.526787 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.875206 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.978060 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.986057 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config\") pod \"7faa379c-e24f-4820-87d8-4e94e641f298\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.986119 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config\") pod \"7faa379c-e24f-4820-87d8-4e94e641f298\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.986163 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l6wx\" (UniqueName: \"kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx\") pod \"7faa379c-e24f-4820-87d8-4e94e641f298\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.986224 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle\") pod \"7faa379c-e24f-4820-87d8-4e94e641f298\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.986482 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs\") pod \"7faa379c-e24f-4820-87d8-4e94e641f298\" (UID: \"7faa379c-e24f-4820-87d8-4e94e641f298\") " Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.997807 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx" (OuterVolumeSpecName: "kube-api-access-2l6wx") pod "7faa379c-e24f-4820-87d8-4e94e641f298" (UID: "7faa379c-e24f-4820-87d8-4e94e641f298"). InnerVolumeSpecName "kube-api-access-2l6wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:17 crc kubenswrapper[4769]: I1006 07:33:17.999675 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7faa379c-e24f-4820-87d8-4e94e641f298" (UID: "7faa379c-e24f-4820-87d8-4e94e641f298"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.073862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7faa379c-e24f-4820-87d8-4e94e641f298" (UID: "7faa379c-e24f-4820-87d8-4e94e641f298"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.088972 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config" (OuterVolumeSpecName: "config") pod "7faa379c-e24f-4820-87d8-4e94e641f298" (UID: "7faa379c-e24f-4820-87d8-4e94e641f298"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.089459 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.089496 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.089509 4769 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.089519 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l6wx\" (UniqueName: \"kubernetes.io/projected/7faa379c-e24f-4820-87d8-4e94e641f298-kube-api-access-2l6wx\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.100194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7faa379c-e24f-4820-87d8-4e94e641f298" (UID: "7faa379c-e24f-4820-87d8-4e94e641f298"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.191844 4769 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7faa379c-e24f-4820-87d8-4e94e641f298-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.486660 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f6baf3d-2817-453a-b212-4b8860056e9f","Type":"ContainerStarted","Data":"3a076d2e34c93bc336e18606f37deba75cfe4c26a84786b7fe61d40ecc91e4cc"} Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.494060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5dd985fb44-8mpq5" event={"ID":"7faa379c-e24f-4820-87d8-4e94e641f298","Type":"ContainerDied","Data":"7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455"} Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.494094 4769 scope.go:117] "RemoveContainer" containerID="8324ad13c1a3c4c51ce4233012ba5c7a5269d519deacff8bea8681abf29fb22e" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.494192 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5dd985fb44-8mpq5" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500579 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerStarted","Data":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500725 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-central-agent" containerID="cri-o://13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" gracePeriod=30 Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500913 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500929 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-notification-agent" containerID="cri-o://89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" gracePeriod=30 Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500947 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="sg-core" containerID="cri-o://6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" gracePeriod=30 Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.500989 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="proxy-httpd" containerID="cri-o://f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" gracePeriod=30 Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.509038 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6f346861-ee62-493c-82bd-2ea7fa7347e6","Type":"ContainerStarted","Data":"95055231fe8f0046ef2f2a56359bd8cce5bead7530e65620fd24fa6234750c68"} Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.523178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5f9369da-2ad7-4cd1-a161-41ece66008e0","Type":"ContainerStarted","Data":"0c2d9d6b9d44b930db8b036c2d06a5b68b0b8c6f7ae006d7e372f01c92e874ce"} Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.535189 4769 scope.go:117] "RemoveContainer" containerID="e0568f4d060c057d1c843efbe83d0b97496a11e921085f3b85dde6fe6f7c7075" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.535365 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.546771 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5dd985fb44-8mpq5"] Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.549639 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.100075219 podStartE2EDuration="17.549628876s" podCreationTimestamp="2025-10-06 07:33:01 +0000 UTC" firstStartedPulling="2025-10-06 07:33:03.076284411 +0000 UTC m=+979.600565558" lastFinishedPulling="2025-10-06 07:33:17.525838068 +0000 UTC m=+994.050119215" observedRunningTime="2025-10-06 07:33:18.540031713 +0000 UTC m=+995.064312860" watchObservedRunningTime="2025-10-06 07:33:18.549628876 +0000 UTC m=+995.073910043" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.572784 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.382153112 podStartE2EDuration="10.572655715s" podCreationTimestamp="2025-10-06 07:33:08 +0000 UTC" firstStartedPulling="2025-10-06 07:33:09.203540803 
+0000 UTC m=+985.727821950" lastFinishedPulling="2025-10-06 07:33:17.394043396 +0000 UTC m=+993.918324553" observedRunningTime="2025-10-06 07:33:18.570250239 +0000 UTC m=+995.094531376" watchObservedRunningTime="2025-10-06 07:33:18.572655715 +0000 UTC m=+995.096936862" Oct 06 07:33:18 crc kubenswrapper[4769]: E1006 07:33:18.581965 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7faa379c_e24f_4820_87d8_4e94e641f298.slice/crio-7a41cad3f6e7a943c6a261ae256aacc9d731b6b365cc852a9aa0276cfd6a6455\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7faa379c_e24f_4820_87d8_4e94e641f298.slice\": RecentStats: unable to find data in memory cache]" Oct 06 07:33:18 crc kubenswrapper[4769]: I1006 07:33:18.598602 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.598582243 podStartE2EDuration="8.598582243s" podCreationTimestamp="2025-10-06 07:33:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:33:18.595643943 +0000 UTC m=+995.119925090" watchObservedRunningTime="2025-10-06 07:33:18.598582243 +0000 UTC m=+995.122863390" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.414241 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522269 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522327 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522365 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522447 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522470 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.522518 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt2j5\" (UniqueName: \"kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5\") pod \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\" (UID: \"a6c3a423-3e74-4f17-95e1-b8d0af99d50b\") " Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.525122 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.526365 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.526579 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5" (OuterVolumeSpecName: "kube-api-access-dt2j5") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "kube-api-access-dt2j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.528743 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts" (OuterVolumeSpecName: "scripts") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.538690 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f6baf3d-2817-453a-b212-4b8860056e9f","Type":"ContainerStarted","Data":"f1202fab8486fcb89530255e6847a6d7d850ef0fb41f118dbb005381b553d48c"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.538739 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3f6baf3d-2817-453a-b212-4b8860056e9f","Type":"ContainerStarted","Data":"a612a90b0ae209c125021ee0af5ee6cf77c080547842c17622f99037f0a15aad"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543675 4769 generic.go:334] "Generic (PLEG): container finished" podID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" exitCode=0 Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543707 4769 generic.go:334] "Generic (PLEG): container finished" podID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" exitCode=2 Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543714 4769 generic.go:334] "Generic (PLEG): container finished" podID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" exitCode=0 Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543715 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543735 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerDied","Data":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerDied","Data":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543773 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerDied","Data":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543781 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerDied","Data":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543796 4769 scope.go:117] "RemoveContainer" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543720 4769 generic.go:334] "Generic (PLEG): container finished" podID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" exitCode=0 Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.543938 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a6c3a423-3e74-4f17-95e1-b8d0af99d50b","Type":"ContainerDied","Data":"be5c195e3939f52d3e35086146633a4f8b3c56222071388d6db2f93cd3475572"} Oct 06 07:33:19 crc 
kubenswrapper[4769]: I1006 07:33:19.551753 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.564053 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.564034576 podStartE2EDuration="8.564034576s" podCreationTimestamp="2025-10-06 07:33:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:33:19.557686123 +0000 UTC m=+996.081967280" watchObservedRunningTime="2025-10-06 07:33:19.564034576 +0000 UTC m=+996.088315723" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.591552 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.618518 4769 scope.go:117] "RemoveContainer" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627245 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627279 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627287 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627296 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627304 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.627312 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt2j5\" (UniqueName: \"kubernetes.io/projected/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-kube-api-access-dt2j5\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.641418 4769 scope.go:117] "RemoveContainer" 
containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.655169 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data" (OuterVolumeSpecName: "config-data") pod "a6c3a423-3e74-4f17-95e1-b8d0af99d50b" (UID: "a6c3a423-3e74-4f17-95e1-b8d0af99d50b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.662030 4769 scope.go:117] "RemoveContainer" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.689369 4769 scope.go:117] "RemoveContainer" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.689818 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": container with ID starting with f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41 not found: ID does not exist" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.689847 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} err="failed to get container status \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": rpc error: code = NotFound desc = could not find container \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": container with ID starting with f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.689866 4769 
scope.go:117] "RemoveContainer" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.690280 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": container with ID starting with 6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81 not found: ID does not exist" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690299 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} err="failed to get container status \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": rpc error: code = NotFound desc = could not find container \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": container with ID starting with 6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690314 4769 scope.go:117] "RemoveContainer" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.690587 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": container with ID starting with 89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734 not found: ID does not exist" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690605 4769 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} err="failed to get container status \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": rpc error: code = NotFound desc = could not find container \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": container with ID starting with 89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690616 4769 scope.go:117] "RemoveContainer" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.690882 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": container with ID starting with 13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b not found: ID does not exist" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690912 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} err="failed to get container status \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": rpc error: code = NotFound desc = could not find container \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": container with ID starting with 13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.690925 4769 scope.go:117] "RemoveContainer" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691197 4769 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} err="failed to get container status \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": rpc error: code = NotFound desc = could not find container \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": container with ID starting with f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691220 4769 scope.go:117] "RemoveContainer" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691646 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} err="failed to get container status \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": rpc error: code = NotFound desc = could not find container \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": container with ID starting with 6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691665 4769 scope.go:117] "RemoveContainer" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691926 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} err="failed to get container status \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": rpc error: code = NotFound desc = could not find container \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": container with ID starting with 89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734 not 
found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.691953 4769 scope.go:117] "RemoveContainer" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692266 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} err="failed to get container status \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": rpc error: code = NotFound desc = could not find container \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": container with ID starting with 13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692312 4769 scope.go:117] "RemoveContainer" containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692675 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} err="failed to get container status \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": rpc error: code = NotFound desc = could not find container \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": container with ID starting with f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692704 4769 scope.go:117] "RemoveContainer" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692960 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} err="failed to get 
container status \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": rpc error: code = NotFound desc = could not find container \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": container with ID starting with 6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.692979 4769 scope.go:117] "RemoveContainer" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693221 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} err="failed to get container status \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": rpc error: code = NotFound desc = could not find container \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": container with ID starting with 89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693252 4769 scope.go:117] "RemoveContainer" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693621 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} err="failed to get container status \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": rpc error: code = NotFound desc = could not find container \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": container with ID starting with 13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693662 4769 scope.go:117] "RemoveContainer" 
containerID="f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693946 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41"} err="failed to get container status \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": rpc error: code = NotFound desc = could not find container \"f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41\": container with ID starting with f45107ecae843f471b8c2e380b09d9135f9ee68eab516e1aa1f248a67488cf41 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.693968 4769 scope.go:117] "RemoveContainer" containerID="6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.694272 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81"} err="failed to get container status \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": rpc error: code = NotFound desc = could not find container \"6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81\": container with ID starting with 6effc7b4c86a9370ce547f1bc01c4ecaccedb74b470d9bff7b40ad4d8a8b9a81 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.694295 4769 scope.go:117] "RemoveContainer" containerID="89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.694608 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734"} err="failed to get container status \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": rpc error: code = NotFound desc = could 
not find container \"89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734\": container with ID starting with 89129dc7ba50ccd22e8d1cc04015db1f0c7b2ff06ff7263e92629a1d88e36734 not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.694630 4769 scope.go:117] "RemoveContainer" containerID="13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.694895 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b"} err="failed to get container status \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": rpc error: code = NotFound desc = could not find container \"13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b\": container with ID starting with 13a00ad0b55a14440d490d8234f336f9e21db16a357fc8c75f09e42a8c43020b not found: ID does not exist" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.728789 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6c3a423-3e74-4f17-95e1-b8d0af99d50b-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.874467 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.883607 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896331 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896665 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-notification-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896682 4769 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-notification-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896694 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896701 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896713 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="proxy-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896719 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="proxy-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896729 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-api" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896735 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-api" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896754 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-central-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896760 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-central-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.896782 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="sg-core" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896788 4769 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="sg-core" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896951 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-notification-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896960 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896969 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="ceilometer-central-agent" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896983 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="proxy-httpd" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.896996 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" containerName="neutron-api" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.897004 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" containerName="sg-core" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.899539 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.905050 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.905399 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.907174 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:19 crc kubenswrapper[4769]: I1006 07:33:19.953550 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:19 crc kubenswrapper[4769]: E1006 07:33:19.954141 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-hlbn9 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="f51c20b4-dad5-4c4e-bf55-be536f089e73" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033086 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbn9\" (UniqueName: \"kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033105 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033136 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033160 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033291 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.033362 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.135707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.135807 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.135924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136016 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136299 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136378 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlbn9\" (UniqueName: \"kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136484 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136551 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.136740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.139312 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.140750 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.141055 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.149298 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.164799 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbn9\" (UniqueName: \"kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9\") pod \"ceilometer-0\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.184728 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7faa379c-e24f-4820-87d8-4e94e641f298" path="/var/lib/kubelet/pods/7faa379c-e24f-4820-87d8-4e94e641f298/volumes" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.186206 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c3a423-3e74-4f17-95e1-b8d0af99d50b" path="/var/lib/kubelet/pods/a6c3a423-3e74-4f17-95e1-b8d0af99d50b/volumes" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.554428 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.588845 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.744714 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.744993 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745021 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745037 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745065 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlbn9\" (UniqueName: \"kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745068 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745152 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745221 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml\") pod \"f51c20b4-dad5-4c4e-bf55-be536f089e73\" (UID: \"f51c20b4-dad5-4c4e-bf55-be536f089e73\") " Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745770 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.745754 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.749249 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.749359 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data" (OuterVolumeSpecName: "config-data") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.749409 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.750162 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts" (OuterVolumeSpecName: "scripts") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.750491 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9" (OuterVolumeSpecName: "kube-api-access-hlbn9") pod "f51c20b4-dad5-4c4e-bf55-be536f089e73" (UID: "f51c20b4-dad5-4c4e-bf55-be536f089e73"). InnerVolumeSpecName "kube-api-access-hlbn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847020 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f51c20b4-dad5-4c4e-bf55-be536f089e73-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847048 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847059 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847068 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlbn9\" (UniqueName: \"kubernetes.io/projected/f51c20b4-dad5-4c4e-bf55-be536f089e73-kube-api-access-hlbn9\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847078 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.847087 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f51c20b4-dad5-4c4e-bf55-be536f089e73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.866448 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.866489 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.895526 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 07:33:20 crc kubenswrapper[4769]: I1006 07:33:20.912597 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.562973 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.563298 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.563331 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.629777 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.652079 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.662883 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.664903 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.666985 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.667841 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.692743 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764535 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764601 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764702 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95vq\" (UniqueName: \"kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " 
pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764763 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.764782 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.765066 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.820310 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.820369 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.855852 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866519 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866572 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866618 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95vq\" (UniqueName: \"kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866641 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866680 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866698 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.866730 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.872051 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.872119 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.872591 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.872901 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.882051 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.883592 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.890301 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.893017 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95vq\" (UniqueName: \"kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq\") pod \"ceilometer-0\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " pod="openstack/ceilometer-0" Oct 06 07:33:21 crc kubenswrapper[4769]: I1006 07:33:21.997756 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:22 crc kubenswrapper[4769]: I1006 07:33:22.182327 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51c20b4-dad5-4c4e-bf55-be536f089e73" path="/var/lib/kubelet/pods/f51c20b4-dad5-4c4e-bf55-be536f089e73/volumes" Oct 06 07:33:22 crc kubenswrapper[4769]: I1006 07:33:22.458571 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:22 crc kubenswrapper[4769]: W1006 07:33:22.463881 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddab255b7_d423_4c61_b8cd_b1062265fe28.slice/crio-5c8a49c2e78fe169eff81ad881daa342e0af034f0e47337168027811882d8978 WatchSource:0}: Error finding container 5c8a49c2e78fe169eff81ad881daa342e0af034f0e47337168027811882d8978: Status 404 returned error can't find the container with id 5c8a49c2e78fe169eff81ad881daa342e0af034f0e47337168027811882d8978 Oct 06 07:33:22 crc kubenswrapper[4769]: I1006 07:33:22.573317 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerStarted","Data":"5c8a49c2e78fe169eff81ad881daa342e0af034f0e47337168027811882d8978"} Oct 06 07:33:22 crc kubenswrapper[4769]: I1006 07:33:22.573611 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:22 crc kubenswrapper[4769]: I1006 07:33:22.573661 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:23 crc kubenswrapper[4769]: I1006 07:33:23.358517 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 07:33:23 crc kubenswrapper[4769]: I1006 07:33:23.592495 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerStarted","Data":"4dc1ba67a60adb484b59038029408f1e4c046637de2caa63b6c2a642ef2191f0"} Oct 06 07:33:24 crc kubenswrapper[4769]: I1006 07:33:24.602947 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerStarted","Data":"b1c60753b5b222416b323400e88c56b223bce71bd7f688ad5a58c07985337862"} Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.273342 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.393881 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhlfj\" (UniqueName: \"kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.393923 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.393972 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.393988 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.394019 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.394184 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.394216 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data\") pod \"b8f7698c-7da5-44c3-8d16-22c9739462e5\" (UID: \"b8f7698c-7da5-44c3-8d16-22c9739462e5\") " Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.395723 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs" (OuterVolumeSpecName: "logs") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.395767 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.401607 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj" (OuterVolumeSpecName: "kube-api-access-zhlfj") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "kube-api-access-zhlfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.408952 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.411592 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts" (OuterVolumeSpecName: "scripts") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.414834 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.446553 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.477311 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data" (OuterVolumeSpecName: "config-data") pod "b8f7698c-7da5-44c3-8d16-22c9739462e5" (UID: "b8f7698c-7da5-44c3-8d16-22c9739462e5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496145 4769 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b8f7698c-7da5-44c3-8d16-22c9739462e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496175 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496184 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhlfj\" (UniqueName: \"kubernetes.io/projected/b8f7698c-7da5-44c3-8d16-22c9739462e5-kube-api-access-zhlfj\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496194 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496202 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496211 4769 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8f7698c-7da5-44c3-8d16-22c9739462e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.496219 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f7698c-7da5-44c3-8d16-22c9739462e5-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.506162 4769 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.610972 4769 generic.go:334] "Generic (PLEG): container finished" podID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerID="0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765" exitCode=137 Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.611026 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerDied","Data":"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765"} Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.611050 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b8f7698c-7da5-44c3-8d16-22c9739462e5","Type":"ContainerDied","Data":"d318a17857edbdc89e7cf1995c0d7de7a05130d3e68312fe8dbb55d3c612523d"} Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.611066 4769 scope.go:117] "RemoveContainer" containerID="0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.611175 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.618221 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerStarted","Data":"63717b042644f3dd9b5df1cd962874cb14115c303628fca9fba43cfd2cc0f9fa"} Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.639050 4769 scope.go:117] "RemoveContainer" containerID="7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.645838 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.652366 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.666347 4769 scope.go:117] "RemoveContainer" containerID="0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765" Oct 06 07:33:25 crc kubenswrapper[4769]: E1006 07:33:25.667639 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765\": container with ID starting with 0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765 not found: ID does not exist" containerID="0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.667700 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765"} err="failed to get container status \"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765\": rpc error: code = NotFound desc = could not find container \"0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765\": container with ID starting with 
0a239319efc1f0b21bfe2bb49e2c17e2646b8d6070b8b5949b35942431f5c765 not found: ID does not exist" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.667734 4769 scope.go:117] "RemoveContainer" containerID="7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.668172 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:33:25 crc kubenswrapper[4769]: E1006 07:33:25.668577 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.668599 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api" Oct 06 07:33:25 crc kubenswrapper[4769]: E1006 07:33:25.668627 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api-log" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.668637 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api-log" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.668857 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.668881 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" containerName="cinder-api-log" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.669934 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: E1006 07:33:25.672731 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a\": container with ID starting with 7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a not found: ID does not exist" containerID="7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.672787 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a"} err="failed to get container status \"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a\": rpc error: code = NotFound desc = could not find container \"7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a\": container with ID starting with 7f732731f97c636c3d3c2c1ecf6181b03d49e17c55fc177b4f5802a15b4f9c0a not found: ID does not exist" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.673498 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.673586 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.673804 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.683990 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-logs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807186 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807482 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data-custom\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807659 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807692 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8pps\" (UniqueName: \"kubernetes.io/projected/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-kube-api-access-s8pps\") pod 
\"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807747 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807777 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-scripts\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.807796 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.909783 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-logs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.909864 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.909998 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910055 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910072 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data-custom\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910277 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-logs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910305 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8pps\" (UniqueName: \"kubernetes.io/projected/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-kube-api-access-s8pps\") pod \"cinder-api-0\" (UID: 
\"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910386 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-scripts\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.910537 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.915964 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.916210 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.917104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-scripts\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.917548 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.918937 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-config-data-custom\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.919258 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.937242 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8pps\" (UniqueName: \"kubernetes.io/projected/f07f5e2f-83e9-4cfe-a9b5-d372bfecc869-kube-api-access-s8pps\") pod \"cinder-api-0\" (UID: \"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869\") " pod="openstack/cinder-api-0" Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.940817 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:25 crc kubenswrapper[4769]: I1006 07:33:25.992618 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.181805 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8f7698c-7da5-44c3-8d16-22c9739462e5" path="/var/lib/kubelet/pods/b8f7698c-7da5-44c3-8d16-22c9739462e5/volumes" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.441996 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 06 07:33:26 crc kubenswrapper[4769]: W1006 07:33:26.442879 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf07f5e2f_83e9_4cfe_a9b5_d372bfecc869.slice/crio-593ae52efc239f7f289f30b84d00549073a791cd923398b45cc218ec0ce36e2b WatchSource:0}: Error finding container 593ae52efc239f7f289f30b84d00549073a791cd923398b45cc218ec0ce36e2b: Status 404 returned error can't find the container with id 593ae52efc239f7f289f30b84d00549073a791cd923398b45cc218ec0ce36e2b Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635263 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerStarted","Data":"e335068d0baf2c42bd6898138ef506fc6111526036bd226a86cee3d73346f133"} Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635376 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-central-agent" containerID="cri-o://4dc1ba67a60adb484b59038029408f1e4c046637de2caa63b6c2a642ef2191f0" gracePeriod=30 Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635392 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="proxy-httpd" containerID="cri-o://e335068d0baf2c42bd6898138ef506fc6111526036bd226a86cee3d73346f133" gracePeriod=30 Oct 06 
07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635448 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635482 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-notification-agent" containerID="cri-o://b1c60753b5b222416b323400e88c56b223bce71bd7f688ad5a58c07985337862" gracePeriod=30 Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.635482 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="sg-core" containerID="cri-o://63717b042644f3dd9b5df1cd962874cb14115c303628fca9fba43cfd2cc0f9fa" gracePeriod=30 Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.645351 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869","Type":"ContainerStarted","Data":"593ae52efc239f7f289f30b84d00549073a791cd923398b45cc218ec0ce36e2b"} Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.746383 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.936583889 podStartE2EDuration="5.74634771s" podCreationTimestamp="2025-10-06 07:33:21 +0000 UTC" firstStartedPulling="2025-10-06 07:33:22.466336929 +0000 UTC m=+998.990618076" lastFinishedPulling="2025-10-06 07:33:26.27610073 +0000 UTC m=+1002.800381897" observedRunningTime="2025-10-06 07:33:26.667148125 +0000 UTC m=+1003.191429272" watchObservedRunningTime="2025-10-06 07:33:26.74634771 +0000 UTC m=+1003.270628857" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.751207 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-l5m6l"] Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.752272 4769 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.769989 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l5m6l"] Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.825328 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfphr\" (UniqueName: \"kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr\") pod \"nova-api-db-create-l5m6l\" (UID: \"36a24429-6273-4103-a2b6-0e570b0bd532\") " pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.895466 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-96clh"] Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.896826 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.951076 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfphr\" (UniqueName: \"kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr\") pod \"nova-api-db-create-l5m6l\" (UID: \"36a24429-6273-4103-a2b6-0e570b0bd532\") " pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:26 crc kubenswrapper[4769]: I1006 07:33:26.952169 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-96clh"] Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.007009 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfphr\" (UniqueName: \"kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr\") pod \"nova-api-db-create-l5m6l\" (UID: \"36a24429-6273-4103-a2b6-0e570b0bd532\") " pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:27 crc kubenswrapper[4769]: 
I1006 07:33:27.055505 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktl9f\" (UniqueName: \"kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f\") pod \"nova-cell0-db-create-96clh\" (UID: \"a91c7119-38b5-427e-a20a-010e2fc8b788\") " pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.075510 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8w99c"] Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.076704 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.085684 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.090740 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8w99c"] Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.160232 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrph\" (UniqueName: \"kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph\") pod \"nova-cell1-db-create-8w99c\" (UID: \"870fca03-ee61-4dfc-b9d2-53e3f1ff3694\") " pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.160372 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktl9f\" (UniqueName: \"kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f\") pod \"nova-cell0-db-create-96clh\" (UID: \"a91c7119-38b5-427e-a20a-010e2fc8b788\") " pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.178787 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ktl9f\" (UniqueName: \"kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f\") pod \"nova-cell0-db-create-96clh\" (UID: \"a91c7119-38b5-427e-a20a-010e2fc8b788\") " pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.261867 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.262344 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrph\" (UniqueName: \"kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph\") pod \"nova-cell1-db-create-8w99c\" (UID: \"870fca03-ee61-4dfc-b9d2-53e3f1ff3694\") " pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.279941 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrph\" (UniqueName: \"kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph\") pod \"nova-cell1-db-create-8w99c\" (UID: \"870fca03-ee61-4dfc-b9d2-53e3f1ff3694\") " pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.414397 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.575592 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-l5m6l"] Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700390 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerID="e335068d0baf2c42bd6898138ef506fc6111526036bd226a86cee3d73346f133" exitCode=0 Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700438 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerID="63717b042644f3dd9b5df1cd962874cb14115c303628fca9fba43cfd2cc0f9fa" exitCode=2 Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700454 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerID="b1c60753b5b222416b323400e88c56b223bce71bd7f688ad5a58c07985337862" exitCode=0 Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700498 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerDied","Data":"e335068d0baf2c42bd6898138ef506fc6111526036bd226a86cee3d73346f133"} Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700534 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerDied","Data":"63717b042644f3dd9b5df1cd962874cb14115c303628fca9fba43cfd2cc0f9fa"} Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.700549 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerDied","Data":"b1c60753b5b222416b323400e88c56b223bce71bd7f688ad5a58c07985337862"} Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.707107 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/cinder-api-0" event={"ID":"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869","Type":"ContainerStarted","Data":"72421b764adc050d17495b9126994df859d569d07d0aa66920fdd9a0ba75a74a"} Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.710950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l5m6l" event={"ID":"36a24429-6273-4103-a2b6-0e570b0bd532","Type":"ContainerStarted","Data":"d2f3748d497c022c6a3018bf15c1abd9977f7f7252d0ff844123220bc76699a2"} Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.825899 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-96clh"] Oct 06 07:33:27 crc kubenswrapper[4769]: W1006 07:33:27.836680 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda91c7119_38b5_427e_a20a_010e2fc8b788.slice/crio-016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e WatchSource:0}: Error finding container 016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e: Status 404 returned error can't find the container with id 016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.877009 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 06 07:33:27 crc kubenswrapper[4769]: I1006 07:33:27.956213 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8w99c"] Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.723597 4769 generic.go:334] "Generic (PLEG): container finished" podID="a91c7119-38b5-427e-a20a-010e2fc8b788" containerID="c00d6bac6380024a3b5ffe5ef9657d09f1d19ea67f9bf864c106c4b32b14d3ac" exitCode=0 Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.723750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-96clh" 
event={"ID":"a91c7119-38b5-427e-a20a-010e2fc8b788","Type":"ContainerDied","Data":"c00d6bac6380024a3b5ffe5ef9657d09f1d19ea67f9bf864c106c4b32b14d3ac"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.724002 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-96clh" event={"ID":"a91c7119-38b5-427e-a20a-010e2fc8b788","Type":"ContainerStarted","Data":"016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.726664 4769 generic.go:334] "Generic (PLEG): container finished" podID="36a24429-6273-4103-a2b6-0e570b0bd532" containerID="44d7dcae3bf3d57c11c33ed7807a2e948a246505c1ecceb458b8fd040ea7a960" exitCode=0 Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.726721 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l5m6l" event={"ID":"36a24429-6273-4103-a2b6-0e570b0bd532","Type":"ContainerDied","Data":"44d7dcae3bf3d57c11c33ed7807a2e948a246505c1ecceb458b8fd040ea7a960"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.729739 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f07f5e2f-83e9-4cfe-a9b5-d372bfecc869","Type":"ContainerStarted","Data":"d7bc06fe51acda1e4b4cce63127ad26c281d4f16342132aa7604fb2ae7b812e7"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.730740 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.734273 4769 generic.go:334] "Generic (PLEG): container finished" podID="870fca03-ee61-4dfc-b9d2-53e3f1ff3694" containerID="aff6c35585f688a58e3558544917702eff0660dbe2ec31f2f2692f03ff86994d" exitCode=0 Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.734309 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8w99c" 
event={"ID":"870fca03-ee61-4dfc-b9d2-53e3f1ff3694","Type":"ContainerDied","Data":"aff6c35585f688a58e3558544917702eff0660dbe2ec31f2f2692f03ff86994d"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.734329 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8w99c" event={"ID":"870fca03-ee61-4dfc-b9d2-53e3f1ff3694","Type":"ContainerStarted","Data":"f39d837f8c7753f2b50a6b7d2134716d6382ffd4b9c44f22c24ff54c1fb9876f"} Oct 06 07:33:28 crc kubenswrapper[4769]: I1006 07:33:28.808662 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.808645617 podStartE2EDuration="3.808645617s" podCreationTimestamp="2025-10-06 07:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:33:28.795401275 +0000 UTC m=+1005.319682422" watchObservedRunningTime="2025-10-06 07:33:28.808645617 +0000 UTC m=+1005.332926764" Oct 06 07:33:29 crc kubenswrapper[4769]: I1006 07:33:29.796071 4769 generic.go:334] "Generic (PLEG): container finished" podID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerID="4dc1ba67a60adb484b59038029408f1e4c046637de2caa63b6c2a642ef2191f0" exitCode=0 Oct 06 07:33:29 crc kubenswrapper[4769]: I1006 07:33:29.796591 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerDied","Data":"4dc1ba67a60adb484b59038029408f1e4c046637de2caa63b6c2a642ef2191f0"} Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.193737 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317070 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95vq\" (UniqueName: \"kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317116 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317163 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317185 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317269 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317294 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317347 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle\") pod \"dab255b7-d423-4c61-b8cd-b1062265fe28\" (UID: \"dab255b7-d423-4c61-b8cd-b1062265fe28\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317842 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.317873 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.322658 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq" (OuterVolumeSpecName: "kube-api-access-j95vq") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "kube-api-access-j95vq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.323539 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts" (OuterVolumeSpecName: "scripts") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.343358 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.386470 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.388005 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.395134 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.405029 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419583 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95vq\" (UniqueName: \"kubernetes.io/projected/dab255b7-d423-4c61-b8cd-b1062265fe28-kube-api-access-j95vq\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419606 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419617 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419626 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dab255b7-d423-4c61-b8cd-b1062265fe28-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419635 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.419662 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.426843 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data" (OuterVolumeSpecName: "config-data") pod "dab255b7-d423-4c61-b8cd-b1062265fe28" (UID: "dab255b7-d423-4c61-b8cd-b1062265fe28"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.520386 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktl9f\" (UniqueName: \"kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f\") pod \"a91c7119-38b5-427e-a20a-010e2fc8b788\" (UID: \"a91c7119-38b5-427e-a20a-010e2fc8b788\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.520553 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrph\" (UniqueName: \"kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph\") pod \"870fca03-ee61-4dfc-b9d2-53e3f1ff3694\" (UID: \"870fca03-ee61-4dfc-b9d2-53e3f1ff3694\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.520639 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfphr\" (UniqueName: \"kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr\") pod \"36a24429-6273-4103-a2b6-0e570b0bd532\" (UID: \"36a24429-6273-4103-a2b6-0e570b0bd532\") " Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.521073 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dab255b7-d423-4c61-b8cd-b1062265fe28-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.523588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f" (OuterVolumeSpecName: "kube-api-access-ktl9f") pod "a91c7119-38b5-427e-a20a-010e2fc8b788" (UID: "a91c7119-38b5-427e-a20a-010e2fc8b788"). InnerVolumeSpecName "kube-api-access-ktl9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.525303 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr" (OuterVolumeSpecName: "kube-api-access-jfphr") pod "36a24429-6273-4103-a2b6-0e570b0bd532" (UID: "36a24429-6273-4103-a2b6-0e570b0bd532"). InnerVolumeSpecName "kube-api-access-jfphr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.527616 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph" (OuterVolumeSpecName: "kube-api-access-vnrph") pod "870fca03-ee61-4dfc-b9d2-53e3f1ff3694" (UID: "870fca03-ee61-4dfc-b9d2-53e3f1ff3694"). InnerVolumeSpecName "kube-api-access-vnrph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.622670 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktl9f\" (UniqueName: \"kubernetes.io/projected/a91c7119-38b5-427e-a20a-010e2fc8b788-kube-api-access-ktl9f\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.622705 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrph\" (UniqueName: \"kubernetes.io/projected/870fca03-ee61-4dfc-b9d2-53e3f1ff3694-kube-api-access-vnrph\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.622716 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfphr\" (UniqueName: \"kubernetes.io/projected/36a24429-6273-4103-a2b6-0e570b0bd532-kube-api-access-jfphr\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.805990 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-96clh" 
event={"ID":"a91c7119-38b5-427e-a20a-010e2fc8b788","Type":"ContainerDied","Data":"016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e"} Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.806043 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="016ad6ebf336efd2f5d3531b6c9a7bd52c3f673b309035d8e092308dd4433b3e" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.806004 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-96clh" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.807336 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-l5m6l" event={"ID":"36a24429-6273-4103-a2b6-0e570b0bd532","Type":"ContainerDied","Data":"d2f3748d497c022c6a3018bf15c1abd9977f7f7252d0ff844123220bc76699a2"} Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.807392 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2f3748d497c022c6a3018bf15c1abd9977f7f7252d0ff844123220bc76699a2" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.807396 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-l5m6l" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.810352 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dab255b7-d423-4c61-b8cd-b1062265fe28","Type":"ContainerDied","Data":"5c8a49c2e78fe169eff81ad881daa342e0af034f0e47337168027811882d8978"} Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.810391 4769 scope.go:117] "RemoveContainer" containerID="e335068d0baf2c42bd6898138ef506fc6111526036bd226a86cee3d73346f133" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.810406 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.813177 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8w99c" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.813197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8w99c" event={"ID":"870fca03-ee61-4dfc-b9d2-53e3f1ff3694","Type":"ContainerDied","Data":"f39d837f8c7753f2b50a6b7d2134716d6382ffd4b9c44f22c24ff54c1fb9876f"} Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.813235 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39d837f8c7753f2b50a6b7d2134716d6382ffd4b9c44f22c24ff54c1fb9876f" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.853631 4769 scope.go:117] "RemoveContainer" containerID="63717b042644f3dd9b5df1cd962874cb14115c303628fca9fba43cfd2cc0f9fa" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.882454 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.882737 4769 scope.go:117] "RemoveContainer" containerID="b1c60753b5b222416b323400e88c56b223bce71bd7f688ad5a58c07985337862" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.888066 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.909893 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910651 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="sg-core" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910677 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="sg-core" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 
07:33:30.910695 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870fca03-ee61-4dfc-b9d2-53e3f1ff3694" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910704 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="870fca03-ee61-4dfc-b9d2-53e3f1ff3694" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910725 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a91c7119-38b5-427e-a20a-010e2fc8b788" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910734 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a91c7119-38b5-427e-a20a-010e2fc8b788" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910758 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="proxy-httpd" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910767 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="proxy-httpd" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910787 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-central-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910796 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-central-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910805 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-notification-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910814 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" 
containerName="ceilometer-notification-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: E1006 07:33:30.910823 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a24429-6273-4103-a2b6-0e570b0bd532" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.910831 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a24429-6273-4103-a2b6-0e570b0bd532" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911090 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-notification-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911119 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a91c7119-38b5-427e-a20a-010e2fc8b788" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911132 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="870fca03-ee61-4dfc-b9d2-53e3f1ff3694" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911142 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="ceilometer-central-agent" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911159 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="sg-core" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911172 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" containerName="proxy-httpd" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.911193 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a24429-6273-4103-a2b6-0e570b0bd532" containerName="mariadb-database-create" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.917026 4769 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.920154 4769 scope.go:117] "RemoveContainer" containerID="4dc1ba67a60adb484b59038029408f1e4c046637de2caa63b6c2a642ef2191f0" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.927536 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.928016 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:33:30 crc kubenswrapper[4769]: I1006 07:33:30.929983 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032725 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032784 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032822 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgtr\" (UniqueName: \"kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032839 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032872 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032894 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.032914 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134347 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134404 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd\") pod 
\"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134587 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134636 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134683 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzgtr\" (UniqueName: \"kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134703 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134747 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.134996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.135530 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.140019 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.140246 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.140367 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.152873 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.162473 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzgtr\" (UniqueName: \"kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr\") pod \"ceilometer-0\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.255716 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.736555 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:31 crc kubenswrapper[4769]: W1006 07:33:31.750156 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-0cf275d58ea142055ea03df777b8da1b02f5717aa1d4f7a5e26fa8a9f7df2479 WatchSource:0}: Error finding container 0cf275d58ea142055ea03df777b8da1b02f5717aa1d4f7a5e26fa8a9f7df2479: Status 404 returned error can't find the container with id 0cf275d58ea142055ea03df777b8da1b02f5717aa1d4f7a5e26fa8a9f7df2479 Oct 06 07:33:31 crc kubenswrapper[4769]: I1006 07:33:31.825075 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerStarted","Data":"0cf275d58ea142055ea03df777b8da1b02f5717aa1d4f7a5e26fa8a9f7df2479"} Oct 06 07:33:32 crc kubenswrapper[4769]: I1006 07:33:32.180226 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dab255b7-d423-4c61-b8cd-b1062265fe28" path="/var/lib/kubelet/pods/dab255b7-d423-4c61-b8cd-b1062265fe28/volumes" Oct 06 07:33:32 crc kubenswrapper[4769]: I1006 07:33:32.832606 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerStarted","Data":"2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d"} 
Oct 06 07:33:32 crc kubenswrapper[4769]: I1006 07:33:32.832908 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerStarted","Data":"0b1ff6e4f5d576d6b8114814ef34b4c06e896dbc41f8512926a0b971f5160ada"} Oct 06 07:33:33 crc kubenswrapper[4769]: I1006 07:33:33.842129 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerStarted","Data":"c64228d7755702af89cb6ec9e234228cec7e5e162e2f76876e2e2cb0b585bd9c"} Oct 06 07:33:35 crc kubenswrapper[4769]: I1006 07:33:35.874063 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerStarted","Data":"81765bb123a39eecb5ac14cd78af8ebb8165fa49c4a70bc5c4b8384cb8ebdf41"} Oct 06 07:33:35 crc kubenswrapper[4769]: I1006 07:33:35.874695 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:33:35 crc kubenswrapper[4769]: I1006 07:33:35.900358 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.989739194 podStartE2EDuration="5.900333293s" podCreationTimestamp="2025-10-06 07:33:30 +0000 UTC" firstStartedPulling="2025-10-06 07:33:31.753229584 +0000 UTC m=+1008.277510731" lastFinishedPulling="2025-10-06 07:33:34.663823683 +0000 UTC m=+1011.188104830" observedRunningTime="2025-10-06 07:33:35.892598432 +0000 UTC m=+1012.416879579" watchObservedRunningTime="2025-10-06 07:33:35.900333293 +0000 UTC m=+1012.424614440" Oct 06 07:33:36 crc kubenswrapper[4769]: I1006 07:33:36.914183 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-51fe-account-create-bxxcd"] Oct 06 07:33:36 crc kubenswrapper[4769]: I1006 07:33:36.916411 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:36 crc kubenswrapper[4769]: I1006 07:33:36.918710 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 06 07:33:36 crc kubenswrapper[4769]: I1006 07:33:36.940895 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-51fe-account-create-bxxcd"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.044411 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k\") pod \"nova-api-51fe-account-create-bxxcd\" (UID: \"797de656-8644-4e69-9bd9-a568002a8413\") " pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.094697 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c2d7-account-create-wqxvp"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.096682 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.098758 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.143659 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c2d7-account-create-wqxvp"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.146270 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k\") pod \"nova-api-51fe-account-create-bxxcd\" (UID: \"797de656-8644-4e69-9bd9-a568002a8413\") " pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.164609 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k\") pod \"nova-api-51fe-account-create-bxxcd\" (UID: \"797de656-8644-4e69-9bd9-a568002a8413\") " pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.247697 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6rc\" (UniqueName: \"kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc\") pod \"nova-cell0-c2d7-account-create-wqxvp\" (UID: \"1dfbf184-0c47-4c5b-aa47-70d7605b437b\") " pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.261333 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.350480 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6rc\" (UniqueName: \"kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc\") pod \"nova-cell0-c2d7-account-create-wqxvp\" (UID: \"1dfbf184-0c47-4c5b-aa47-70d7605b437b\") " pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.397387 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e12c-account-create-skw9b"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.398534 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.402922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6rc\" (UniqueName: \"kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc\") pod \"nova-cell0-c2d7-account-create-wqxvp\" (UID: \"1dfbf184-0c47-4c5b-aa47-70d7605b437b\") " pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.412923 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.416629 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.428289 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e12c-account-create-skw9b"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.452066 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsv52\" (UniqueName: \"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52\") pod \"nova-cell1-e12c-account-create-skw9b\" (UID: \"d09c3d22-3073-474f-8465-db74ea316281\") " pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.553269 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsv52\" (UniqueName: \"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52\") pod \"nova-cell1-e12c-account-create-skw9b\" (UID: \"d09c3d22-3073-474f-8465-db74ea316281\") " pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.578147 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsv52\" (UniqueName: \"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52\") pod \"nova-cell1-e12c-account-create-skw9b\" (UID: \"d09c3d22-3073-474f-8465-db74ea316281\") " pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.866810 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.908247 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-51fe-account-create-bxxcd"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.964243 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c2d7-account-create-wqxvp"] Oct 06 07:33:37 crc kubenswrapper[4769]: I1006 07:33:37.991440 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 06 07:33:38 crc kubenswrapper[4769]: W1006 07:33:38.327683 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd09c3d22_3073_474f_8465_db74ea316281.slice/crio-0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1 WatchSource:0}: Error finding container 0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1: Status 404 returned error can't find the container with id 0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1 Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.328736 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e12c-account-create-skw9b"] Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.903810 4769 generic.go:334] "Generic (PLEG): container finished" podID="797de656-8644-4e69-9bd9-a568002a8413" containerID="34c0a60d91a80bffcae5aa0025fcf58100e9696827a3daa874892d102d47fd2c" exitCode=0 Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.904117 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-51fe-account-create-bxxcd" event={"ID":"797de656-8644-4e69-9bd9-a568002a8413","Type":"ContainerDied","Data":"34c0a60d91a80bffcae5aa0025fcf58100e9696827a3daa874892d102d47fd2c"} Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.904140 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-51fe-account-create-bxxcd" event={"ID":"797de656-8644-4e69-9bd9-a568002a8413","Type":"ContainerStarted","Data":"24c485dd86774edadc58e241d7064fd3957059e2348ccebfe2cb80f1f86a31a6"} Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.905778 4769 generic.go:334] "Generic (PLEG): container finished" podID="d09c3d22-3073-474f-8465-db74ea316281" containerID="cc3df954b8ad3066e81b6b1e4123c8a3ea7fed55652ea0f3c1a149b02c183608" exitCode=0 Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.905815 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e12c-account-create-skw9b" event={"ID":"d09c3d22-3073-474f-8465-db74ea316281","Type":"ContainerDied","Data":"cc3df954b8ad3066e81b6b1e4123c8a3ea7fed55652ea0f3c1a149b02c183608"} Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.905829 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e12c-account-create-skw9b" event={"ID":"d09c3d22-3073-474f-8465-db74ea316281","Type":"ContainerStarted","Data":"0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1"} Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.907514 4769 generic.go:334] "Generic (PLEG): container finished" podID="1dfbf184-0c47-4c5b-aa47-70d7605b437b" containerID="74912788c10dd40d7c54002e535982f59c4d05f2f3d25404572e66686d339ad0" exitCode=0 Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.907535 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" event={"ID":"1dfbf184-0c47-4c5b-aa47-70d7605b437b","Type":"ContainerDied","Data":"74912788c10dd40d7c54002e535982f59c4d05f2f3d25404572e66686d339ad0"} Oct 06 07:33:38 crc kubenswrapper[4769]: I1006 07:33:38.907547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" 
event={"ID":"1dfbf184-0c47-4c5b-aa47-70d7605b437b","Type":"ContainerStarted","Data":"adcf25606bccb6902302b0d375ecaede3adfbbcec93453fb4157414e128696b7"} Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.463326 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.468376 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.476877 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.505056 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k\") pod \"797de656-8644-4e69-9bd9-a568002a8413\" (UID: \"797de656-8644-4e69-9bd9-a568002a8413\") " Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.505154 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6rc\" (UniqueName: \"kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc\") pod \"1dfbf184-0c47-4c5b-aa47-70d7605b437b\" (UID: \"1dfbf184-0c47-4c5b-aa47-70d7605b437b\") " Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.505215 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsv52\" (UniqueName: \"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52\") pod \"d09c3d22-3073-474f-8465-db74ea316281\" (UID: \"d09c3d22-3073-474f-8465-db74ea316281\") " Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.514414 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52" (OuterVolumeSpecName: "kube-api-access-bsv52") pod "d09c3d22-3073-474f-8465-db74ea316281" (UID: "d09c3d22-3073-474f-8465-db74ea316281"). InnerVolumeSpecName "kube-api-access-bsv52". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.514614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k" (OuterVolumeSpecName: "kube-api-access-fvd2k") pod "797de656-8644-4e69-9bd9-a568002a8413" (UID: "797de656-8644-4e69-9bd9-a568002a8413"). InnerVolumeSpecName "kube-api-access-fvd2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.516546 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc" (OuterVolumeSpecName: "kube-api-access-cm6rc") pod "1dfbf184-0c47-4c5b-aa47-70d7605b437b" (UID: "1dfbf184-0c47-4c5b-aa47-70d7605b437b"). InnerVolumeSpecName "kube-api-access-cm6rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.607569 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6rc\" (UniqueName: \"kubernetes.io/projected/1dfbf184-0c47-4c5b-aa47-70d7605b437b-kube-api-access-cm6rc\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.607606 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsv52\" (UniqueName: \"kubernetes.io/projected/d09c3d22-3073-474f-8465-db74ea316281-kube-api-access-bsv52\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.607618 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/797de656-8644-4e69-9bd9-a568002a8413-kube-api-access-fvd2k\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.923839 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" event={"ID":"1dfbf184-0c47-4c5b-aa47-70d7605b437b","Type":"ContainerDied","Data":"adcf25606bccb6902302b0d375ecaede3adfbbcec93453fb4157414e128696b7"} Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.923866 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c2d7-account-create-wqxvp" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.923881 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adcf25606bccb6902302b0d375ecaede3adfbbcec93453fb4157414e128696b7" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.925445 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-51fe-account-create-bxxcd" event={"ID":"797de656-8644-4e69-9bd9-a568002a8413","Type":"ContainerDied","Data":"24c485dd86774edadc58e241d7064fd3957059e2348ccebfe2cb80f1f86a31a6"} Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.925468 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24c485dd86774edadc58e241d7064fd3957059e2348ccebfe2cb80f1f86a31a6" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.925495 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-51fe-account-create-bxxcd" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.926768 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e12c-account-create-skw9b" event={"ID":"d09c3d22-3073-474f-8465-db74ea316281","Type":"ContainerDied","Data":"0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1"} Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.926788 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3be8a31b77814533299a86a64d71c6d5b53edd88cdeecd9f3b569ab96717e1" Oct 06 07:33:40 crc kubenswrapper[4769]: I1006 07:33:40.926818 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e12c-account-create-skw9b" Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.626099 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.626345 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-central-agent" containerID="cri-o://0b1ff6e4f5d576d6b8114814ef34b4c06e896dbc41f8512926a0b971f5160ada" gracePeriod=30 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.626731 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="proxy-httpd" containerID="cri-o://81765bb123a39eecb5ac14cd78af8ebb8165fa49c4a70bc5c4b8384cb8ebdf41" gracePeriod=30 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.626778 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="sg-core" containerID="cri-o://c64228d7755702af89cb6ec9e234228cec7e5e162e2f76876e2e2cb0b585bd9c" gracePeriod=30 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.626808 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-notification-agent" containerID="cri-o://2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d" gracePeriod=30 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.937076 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c81ea12-eef8-4593-9181-078ce593c881" containerID="81765bb123a39eecb5ac14cd78af8ebb8165fa49c4a70bc5c4b8384cb8ebdf41" exitCode=0 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.937435 4769 generic.go:334] "Generic (PLEG): container 
finished" podID="9c81ea12-eef8-4593-9181-078ce593c881" containerID="c64228d7755702af89cb6ec9e234228cec7e5e162e2f76876e2e2cb0b585bd9c" exitCode=2 Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.937131 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerDied","Data":"81765bb123a39eecb5ac14cd78af8ebb8165fa49c4a70bc5c4b8384cb8ebdf41"} Oct 06 07:33:41 crc kubenswrapper[4769]: I1006 07:33:41.937479 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerDied","Data":"c64228d7755702af89cb6ec9e234228cec7e5e162e2f76876e2e2cb0b585bd9c"} Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.309905 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-njskw"] Oct 06 07:33:42 crc kubenswrapper[4769]: E1006 07:33:42.310266 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797de656-8644-4e69-9bd9-a568002a8413" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.310278 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="797de656-8644-4e69-9bd9-a568002a8413" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: E1006 07:33:42.310297 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09c3d22-3073-474f-8465-db74ea316281" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.310304 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09c3d22-3073-474f-8465-db74ea316281" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: E1006 07:33:42.310325 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dfbf184-0c47-4c5b-aa47-70d7605b437b" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 
07:33:42.310331 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dfbf184-0c47-4c5b-aa47-70d7605b437b" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.310531 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="797de656-8644-4e69-9bd9-a568002a8413" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.310545 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dfbf184-0c47-4c5b-aa47-70d7605b437b" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.310562 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09c3d22-3073-474f-8465-db74ea316281" containerName="mariadb-account-create" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.311079 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.313455 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vhjk5" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.313779 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.313795 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.355998 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-njskw"] Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.454300 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: 
\"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.454353 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.455021 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.455076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klf95\" (UniqueName: \"kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.557590 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.557644 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts\") pod 
\"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.557740 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.557972 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klf95\" (UniqueName: \"kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.563139 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.563149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.563288 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data\") pod 
\"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.587806 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klf95\" (UniqueName: \"kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95\") pod \"nova-cell0-conductor-db-sync-njskw\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:42 crc kubenswrapper[4769]: I1006 07:33:42.656640 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:33:43 crc kubenswrapper[4769]: I1006 07:33:43.162831 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-njskw"] Oct 06 07:33:43 crc kubenswrapper[4769]: I1006 07:33:43.964336 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-njskw" event={"ID":"8f3ed31e-1322-4476-85f6-398b2366a129","Type":"ContainerStarted","Data":"980634309a0ea48860314ee07c1480bf1c1ae52400266d7f619a8f712e9afb30"} Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.021649 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c81ea12-eef8-4593-9181-078ce593c881" containerID="2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d" exitCode=0 Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.021986 4769 generic.go:334] "Generic (PLEG): container finished" podID="9c81ea12-eef8-4593-9181-078ce593c881" containerID="0b1ff6e4f5d576d6b8114814ef34b4c06e896dbc41f8512926a0b971f5160ada" exitCode=0 Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.021702 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerDied","Data":"2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d"} Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.022025 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerDied","Data":"0b1ff6e4f5d576d6b8114814ef34b4c06e896dbc41f8512926a0b971f5160ada"} Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.282282 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412397 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412531 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412550 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412623 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: 
\"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412646 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzgtr\" (UniqueName: \"kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.412724 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts\") pod \"9c81ea12-eef8-4593-9181-078ce593c881\" (UID: \"9c81ea12-eef8-4593-9181-078ce593c881\") " Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.413810 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.414006 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.419535 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts" (OuterVolumeSpecName: "scripts") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.419544 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr" (OuterVolumeSpecName: "kube-api-access-vzgtr") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "kube-api-access-vzgtr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.446842 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.497236 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514623 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514653 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514663 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzgtr\" (UniqueName: \"kubernetes.io/projected/9c81ea12-eef8-4593-9181-078ce593c881-kube-api-access-vzgtr\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514672 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9c81ea12-eef8-4593-9181-078ce593c881-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514682 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.514693 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.516862 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data" (OuterVolumeSpecName: "config-data") pod "9c81ea12-eef8-4593-9181-078ce593c881" (UID: "9c81ea12-eef8-4593-9181-078ce593c881"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:33:45 crc kubenswrapper[4769]: I1006 07:33:45.616898 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c81ea12-eef8-4593-9181-078ce593c881-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.038358 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9c81ea12-eef8-4593-9181-078ce593c881","Type":"ContainerDied","Data":"0cf275d58ea142055ea03df777b8da1b02f5717aa1d4f7a5e26fa8a9f7df2479"} Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.038405 4769 scope.go:117] "RemoveContainer" containerID="81765bb123a39eecb5ac14cd78af8ebb8165fa49c4a70bc5c4b8384cb8ebdf41" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.038560 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.064664 4769 scope.go:117] "RemoveContainer" containerID="c64228d7755702af89cb6ec9e234228cec7e5e162e2f76876e2e2cb0b585bd9c" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.093004 4769 scope.go:117] "RemoveContainer" containerID="2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.101884 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.122446 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.137912 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:46 crc kubenswrapper[4769]: E1006 07:33:46.140105 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c81ea12-eef8-4593-9181-078ce593c881" 
containerName="ceilometer-central-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.140288 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-central-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: E1006 07:33:46.140431 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-notification-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.140525 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-notification-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: E1006 07:33:46.140675 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="proxy-httpd" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.140767 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="proxy-httpd" Oct 06 07:33:46 crc kubenswrapper[4769]: E1006 07:33:46.140849 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="sg-core" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.141139 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="sg-core" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.141738 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-notification-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.141849 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="sg-core" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.141982 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="ceilometer-central-agent" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.142084 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c81ea12-eef8-4593-9181-078ce593c881" containerName="proxy-httpd" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.146059 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.147685 4769 scope.go:117] "RemoveContainer" containerID="0b1ff6e4f5d576d6b8114814ef34b4c06e896dbc41f8512926a0b971f5160ada" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.148349 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.152339 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.150601 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.181822 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c81ea12-eef8-4593-9181-078ce593c881" path="/var/lib/kubelet/pods/9c81ea12-eef8-4593-9181-078ce593c881/volumes" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.332684 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.332728 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.332965 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.333102 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.333131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.333283 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.333341 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87m4j\" (UniqueName: \"kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j\") pod \"ceilometer-0\" (UID: 
\"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434738 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434786 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434839 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434878 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434895 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434917 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.434938 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87m4j\" (UniqueName: \"kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.435584 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.436446 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.439173 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.439958 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.440017 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.440541 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.450270 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87m4j\" (UniqueName: \"kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j\") pod \"ceilometer-0\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") " pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.470156 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:33:46 crc kubenswrapper[4769]: I1006 07:33:46.930508 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:33:47 crc kubenswrapper[4769]: I1006 07:33:47.053892 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerStarted","Data":"7189950a2d65268f6f281f5a5484bbec54771823c0afeebcb8485448bca9dae6"} Oct 06 07:33:48 crc kubenswrapper[4769]: I1006 07:33:48.065168 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerStarted","Data":"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60"} Oct 06 07:33:48 crc kubenswrapper[4769]: I1006 07:33:48.065630 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerStarted","Data":"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3"} Oct 06 07:33:49 crc kubenswrapper[4769]: I1006 07:33:49.077896 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerStarted","Data":"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01"} Oct 06 07:33:49 crc kubenswrapper[4769]: E1006 07:33:49.348408 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:33:54 crc kubenswrapper[4769]: I1006 07:33:54.122695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-njskw" 
event={"ID":"8f3ed31e-1322-4476-85f6-398b2366a129","Type":"ContainerStarted","Data":"d738b0ef934e51c1f84b7689c836873ad7e231b9a2184ceb0ba02c92efa7418a"} Oct 06 07:33:54 crc kubenswrapper[4769]: I1006 07:33:54.126251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerStarted","Data":"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054"} Oct 06 07:33:54 crc kubenswrapper[4769]: I1006 07:33:54.126513 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:33:54 crc kubenswrapper[4769]: I1006 07:33:54.155317 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-njskw" podStartSLOduration=1.7325761160000002 podStartE2EDuration="12.155293301s" podCreationTimestamp="2025-10-06 07:33:42 +0000 UTC" firstStartedPulling="2025-10-06 07:33:43.179842152 +0000 UTC m=+1019.704123299" lastFinishedPulling="2025-10-06 07:33:53.602559297 +0000 UTC m=+1030.126840484" observedRunningTime="2025-10-06 07:33:54.152441993 +0000 UTC m=+1030.676723150" watchObservedRunningTime="2025-10-06 07:33:54.155293301 +0000 UTC m=+1030.679574488" Oct 06 07:33:54 crc kubenswrapper[4769]: I1006 07:33:54.188181 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.526892717 podStartE2EDuration="8.18816416s" podCreationTimestamp="2025-10-06 07:33:46 +0000 UTC" firstStartedPulling="2025-10-06 07:33:46.941001785 +0000 UTC m=+1023.465282932" lastFinishedPulling="2025-10-06 07:33:53.602273218 +0000 UTC m=+1030.126554375" observedRunningTime="2025-10-06 07:33:54.187242384 +0000 UTC m=+1030.711523541" watchObservedRunningTime="2025-10-06 07:33:54.18816416 +0000 UTC m=+1030.712445307" Oct 06 07:33:59 crc kubenswrapper[4769]: E1006 07:33:59.597572 4769 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:34:07 crc kubenswrapper[4769]: I1006 07:34:07.272840 4769 generic.go:334] "Generic (PLEG): container finished" podID="8f3ed31e-1322-4476-85f6-398b2366a129" containerID="d738b0ef934e51c1f84b7689c836873ad7e231b9a2184ceb0ba02c92efa7418a" exitCode=0 Oct 06 07:34:07 crc kubenswrapper[4769]: I1006 07:34:07.272968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-njskw" event={"ID":"8f3ed31e-1322-4476-85f6-398b2366a129","Type":"ContainerDied","Data":"d738b0ef934e51c1f84b7689c836873ad7e231b9a2184ceb0ba02c92efa7418a"} Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.683726 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.851768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle\") pod \"8f3ed31e-1322-4476-85f6-398b2366a129\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.851849 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klf95\" (UniqueName: \"kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95\") pod \"8f3ed31e-1322-4476-85f6-398b2366a129\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.851927 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts\") pod \"8f3ed31e-1322-4476-85f6-398b2366a129\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.853354 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data\") pod \"8f3ed31e-1322-4476-85f6-398b2366a129\" (UID: \"8f3ed31e-1322-4476-85f6-398b2366a129\") " Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.858009 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95" (OuterVolumeSpecName: "kube-api-access-klf95") pod "8f3ed31e-1322-4476-85f6-398b2366a129" (UID: "8f3ed31e-1322-4476-85f6-398b2366a129"). InnerVolumeSpecName "kube-api-access-klf95". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.860590 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts" (OuterVolumeSpecName: "scripts") pod "8f3ed31e-1322-4476-85f6-398b2366a129" (UID: "8f3ed31e-1322-4476-85f6-398b2366a129"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.880571 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f3ed31e-1322-4476-85f6-398b2366a129" (UID: "8f3ed31e-1322-4476-85f6-398b2366a129"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.902129 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data" (OuterVolumeSpecName: "config-data") pod "8f3ed31e-1322-4476-85f6-398b2366a129" (UID: "8f3ed31e-1322-4476-85f6-398b2366a129"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.956290 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.956326 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.956349 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klf95\" (UniqueName: \"kubernetes.io/projected/8f3ed31e-1322-4476-85f6-398b2366a129-kube-api-access-klf95\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:08 crc kubenswrapper[4769]: I1006 07:34:08.956368 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f3ed31e-1322-4476-85f6-398b2366a129-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.301232 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-njskw" event={"ID":"8f3ed31e-1322-4476-85f6-398b2366a129","Type":"ContainerDied","Data":"980634309a0ea48860314ee07c1480bf1c1ae52400266d7f619a8f712e9afb30"} Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.301302 4769 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="980634309a0ea48860314ee07c1480bf1c1ae52400266d7f619a8f712e9afb30" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.301322 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-njskw" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.421580 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 07:34:09 crc kubenswrapper[4769]: E1006 07:34:09.422236 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f3ed31e-1322-4476-85f6-398b2366a129" containerName="nova-cell0-conductor-db-sync" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.422260 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f3ed31e-1322-4476-85f6-398b2366a129" containerName="nova-cell0-conductor-db-sync" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.422501 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f3ed31e-1322-4476-85f6-398b2366a129" containerName="nova-cell0-conductor-db-sync" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.423950 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.427167 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.427386 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vhjk5" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.437504 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.465120 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.465231 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.465272 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfvht\" (UniqueName: \"kubernetes.io/projected/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-kube-api-access-vfvht\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.567555 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.567957 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.567991 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfvht\" (UniqueName: \"kubernetes.io/projected/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-kube-api-access-vfvht\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.572190 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.580143 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.583605 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfvht\" (UniqueName: \"kubernetes.io/projected/02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62-kube-api-access-vfvht\") pod \"nova-cell0-conductor-0\" (UID: 
\"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62\") " pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: I1006 07:34:09.746714 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:09 crc kubenswrapper[4769]: E1006 07:34:09.826020 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:34:10 crc kubenswrapper[4769]: I1006 07:34:10.258001 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 06 07:34:10 crc kubenswrapper[4769]: W1006 07:34:10.276387 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02b2e9f3_0bde_4259_bb5d_1dd15f3e1e62.slice/crio-3cb64657c7561e0b9cb111ac29845d7ffd021c953b706fe9703ebc5d6aec610d WatchSource:0}: Error finding container 3cb64657c7561e0b9cb111ac29845d7ffd021c953b706fe9703ebc5d6aec610d: Status 404 returned error can't find the container with id 3cb64657c7561e0b9cb111ac29845d7ffd021c953b706fe9703ebc5d6aec610d Oct 06 07:34:10 crc kubenswrapper[4769]: I1006 07:34:10.316996 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62","Type":"ContainerStarted","Data":"3cb64657c7561e0b9cb111ac29845d7ffd021c953b706fe9703ebc5d6aec610d"} Oct 06 07:34:11 crc kubenswrapper[4769]: I1006 07:34:11.327656 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62","Type":"ContainerStarted","Data":"afb3376cc7e8d1e966838d7d894299c23949f0f13a9839023706b40371d391b0"} 
Oct 06 07:34:11 crc kubenswrapper[4769]: I1006 07:34:11.328290 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:11 crc kubenswrapper[4769]: I1006 07:34:11.351150 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.351129393 podStartE2EDuration="2.351129393s" podCreationTimestamp="2025-10-06 07:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:11.345267573 +0000 UTC m=+1047.869548720" watchObservedRunningTime="2025-10-06 07:34:11.351129393 +0000 UTC m=+1047.875410540" Oct 06 07:34:16 crc kubenswrapper[4769]: I1006 07:34:16.475865 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 07:34:19 crc kubenswrapper[4769]: I1006 07:34:19.788088 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 06 07:34:20 crc kubenswrapper[4769]: E1006 07:34:20.048471 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.230022 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.230225 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d657d340-ec67-493c-8403-4bffd42ae0b3" containerName="kube-state-metrics" containerID="cri-o://0f72a0997ed6f8c0590b272fee7dacc0a74eb6e558de7c43e3482e258fcae7e4" 
gracePeriod=30 Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.352488 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-p94mc"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.354266 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.357703 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.358124 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.380688 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p94mc"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.453080 4769 generic.go:334] "Generic (PLEG): container finished" podID="d657d340-ec67-493c-8403-4bffd42ae0b3" containerID="0f72a0997ed6f8c0590b272fee7dacc0a74eb6e558de7c43e3482e258fcae7e4" exitCode=2 Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.453115 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d657d340-ec67-493c-8403-4bffd42ae0b3","Type":"ContainerDied","Data":"0f72a0997ed6f8c0590b272fee7dacc0a74eb6e558de7c43e3482e258fcae7e4"} Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.461344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.461434 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.461500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.461520 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7b8\" (UniqueName: \"kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.526589 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.534482 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.538566 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.546484 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.549732 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.561139 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.562681 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.562955 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.562977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk7b8\" (UniqueName: \"kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.563044 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.584127 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.588790 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.595049 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.611993 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk7b8\" (UniqueName: \"kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8\") pod \"nova-cell0-cell-mapping-p94mc\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.624481 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.649915 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666060 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666121 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666151 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666177 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6lbz\" (UniqueName: \"kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666191 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wjc\" (UniqueName: \"kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.666219 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 
07:34:20.666257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.682916 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.684096 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.686744 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.693355 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.732897 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.802088 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.802387 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.803105 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.803196 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.803251 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6lbz\" (UniqueName: \"kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.803270 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-h7wjc\" (UniqueName: \"kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.806117 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.806239 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.816015 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.818201 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.818360 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.847392 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.847960 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.848289 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.889486 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6lbz\" (UniqueName: \"kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz\") pod \"nova-api-0\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " pod="openstack/nova-api-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.895644 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7wjc\" (UniqueName: \"kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc\") pod \"nova-scheduler-0\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.895920 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.901518 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.912127 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.912400 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.912441 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kqg\" (UniqueName: \"kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.919753 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.921262 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.934853 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:34:20 crc kubenswrapper[4769]: I1006 07:34:20.938653 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.014344 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9ch\" (UniqueName: \"kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.014515 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.014552 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kqg\" (UniqueName: \"kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.014584 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.015030 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc 
kubenswrapper[4769]: I1006 07:34:21.015124 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.015230 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.019872 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.022988 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.028873 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kqg\" (UniqueName: \"kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg\") pod \"nova-cell1-novncproxy-0\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.034983 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.074486 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116243 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmqn\" (UniqueName: \"kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn\") pod \"d657d340-ec67-493c-8403-4bffd42ae0b3\" (UID: \"d657d340-ec67-493c-8403-4bffd42ae0b3\") " Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116739 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116791 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116834 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116878 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116896 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp9ch\" (UniqueName: \"kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116913 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116941 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6npb\" (UniqueName: \"kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.116981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.117000 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.117019 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.118490 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.121092 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.121341 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.127076 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn" (OuterVolumeSpecName: "kube-api-access-snmqn") pod "d657d340-ec67-493c-8403-4bffd42ae0b3" (UID: "d657d340-ec67-493c-8403-4bffd42ae0b3"). InnerVolumeSpecName "kube-api-access-snmqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.139128 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp9ch\" (UniqueName: \"kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch\") pod \"nova-metadata-0\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") " pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.199219 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218099 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218137 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218172 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6npb\" (UniqueName: \"kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218265 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218306 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.218381 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snmqn\" (UniqueName: \"kubernetes.io/projected/d657d340-ec67-493c-8403-4bffd42ae0b3-kube-api-access-snmqn\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.219193 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.219733 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 
crc kubenswrapper[4769]: I1006 07:34:21.219964 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.220253 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.220533 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.236480 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.241058 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6npb\" (UniqueName: \"kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb\") pod \"dnsmasq-dns-5798985649-7mtrk\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.249522 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.315713 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-csxd9"] Oct 06 07:34:21 crc kubenswrapper[4769]: E1006 07:34:21.316106 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d657d340-ec67-493c-8403-4bffd42ae0b3" containerName="kube-state-metrics" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.316117 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d657d340-ec67-493c-8403-4bffd42ae0b3" containerName="kube-state-metrics" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.316281 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d657d340-ec67-493c-8403-4bffd42ae0b3" containerName="kube-state-metrics" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.322954 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.328082 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.328330 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.364413 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-csxd9"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.432078 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p94mc"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.482260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d657d340-ec67-493c-8403-4bffd42ae0b3","Type":"ContainerDied","Data":"c791c199b6292586f4c2049fdb1fade446abceb37d3f0ad249cc2ecbc306bac1"} Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.482320 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.482325 4769 scope.go:117] "RemoveContainer" containerID="0f72a0997ed6f8c0590b272fee7dacc0a74eb6e558de7c43e3482e258fcae7e4" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.484791 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.485992 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p94mc" event={"ID":"529a47ef-9cb3-4d79-9227-66910a7389e9","Type":"ContainerStarted","Data":"2714aeab93b7eea63e879d94422412b53baab53d39f52d668dba779c7a762a8e"} Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.525979 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.526072 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2h2m\" (UniqueName: \"kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.526128 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.526157 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.577726 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.586710 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.595360 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.598028 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.603444 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.605374 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.605448 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630147 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630207 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630244 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45zw8\" (UniqueName: \"kubernetes.io/projected/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-api-access-45zw8\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630282 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630314 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2h2m\" (UniqueName: \"kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630339 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630379 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.630408 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.634104 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.636123 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.637472 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.684739 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2h2m\" (UniqueName: \"kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m\") pod \"nova-cell1-conductor-db-sync-csxd9\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.724893 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: W1006 07:34:21.730076 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode99018fb_2572_49ab_80e7_e6797a99a442.slice/crio-4ac2832ccfe6f4cbeccc1a05978dee0707796ee3c7de0f3fa4a681d3997ad936 WatchSource:0}: Error finding container 4ac2832ccfe6f4cbeccc1a05978dee0707796ee3c7de0f3fa4a681d3997ad936: Status 404 returned error 
can't find the container with id 4ac2832ccfe6f4cbeccc1a05978dee0707796ee3c7de0f3fa4a681d3997ad936 Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.733008 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.740217 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45zw8\" (UniqueName: \"kubernetes.io/projected/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-api-access-45zw8\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.740332 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.740450 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.744740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.747166 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.756382 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45zw8\" (UniqueName: \"kubernetes.io/projected/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-api-access-45zw8\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.760956 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8005a18-8df9-47d9-a446-9e2b18d04409-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d8005a18-8df9-47d9-a446-9e2b18d04409\") " pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.827221 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:21 crc kubenswrapper[4769]: W1006 07:34:21.838376 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96f2521c_a4c7_4bec_80e3_6126d1a36579.slice/crio-3e000db09731ec2fe2bc7bf8a9a14a481ac581bc84b4d52732d587b5d7fd5184 WatchSource:0}: Error finding container 3e000db09731ec2fe2bc7bf8a9a14a481ac581bc84b4d52732d587b5d7fd5184: Status 404 returned error can't find the container with id 3e000db09731ec2fe2bc7bf8a9a14a481ac581bc84b4d52732d587b5d7fd5184 Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.862644 4769 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.982862 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 06 07:34:21 crc kubenswrapper[4769]: I1006 07:34:21.996760 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.014182 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:34:22 crc kubenswrapper[4769]: W1006 07:34:22.024817 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dadcedc_6861_47a3_a7db_8eb843ff98e5.slice/crio-5afb5d187765cc15dd5344b228609b0cda34a5cf6b4ed7936a289a31c3df71e8 WatchSource:0}: Error finding container 5afb5d187765cc15dd5344b228609b0cda34a5cf6b4ed7936a289a31c3df71e8: Status 404 returned error can't find the container with id 5afb5d187765cc15dd5344b228609b0cda34a5cf6b4ed7936a289a31c3df71e8 Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.217130 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d657d340-ec67-493c-8403-4bffd42ae0b3" path="/var/lib/kubelet/pods/d657d340-ec67-493c-8403-4bffd42ae0b3/volumes" Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.349160 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.390193 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-csxd9"] Oct 06 07:34:22 crc kubenswrapper[4769]: W1006 07:34:22.429242 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8005a18_8df9_47d9_a446_9e2b18d04409.slice/crio-f55310c3982ffc9f1f1fd9f72061fc46493ee5cde0673985e2f7c7257669bffe WatchSource:0}: Error finding container f55310c3982ffc9f1f1fd9f72061fc46493ee5cde0673985e2f7c7257669bffe: Status 404 returned error can't find the container with id f55310c3982ffc9f1f1fd9f72061fc46493ee5cde0673985e2f7c7257669bffe Oct 06 07:34:22 crc kubenswrapper[4769]: W1006 07:34:22.433797 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c3b8223_fede_4508_a234_c32e9cc406c5.slice/crio-6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110 WatchSource:0}: Error finding container 6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110: Status 404 returned error can't find the container with id 6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110 Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.497222 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e99018fb-2572-49ab-80e7-e6797a99a442","Type":"ContainerStarted","Data":"4ac2832ccfe6f4cbeccc1a05978dee0707796ee3c7de0f3fa4a681d3997ad936"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.498784 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96f2521c-a4c7-4bec-80e3-6126d1a36579","Type":"ContainerStarted","Data":"3e000db09731ec2fe2bc7bf8a9a14a481ac581bc84b4d52732d587b5d7fd5184"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.506268 4769 generic.go:334] "Generic (PLEG): container finished" podID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerID="a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3" exitCode=0 Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.506361 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798985649-7mtrk" 
event={"ID":"d17e6089-8fb3-4ff8-b603-7a266693936a","Type":"ContainerDied","Data":"a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.506445 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798985649-7mtrk" event={"ID":"d17e6089-8fb3-4ff8-b603-7a266693936a","Type":"ContainerStarted","Data":"028c6bdd8afb884f835d09000564305b2ea9be32dfbf63c3f1042e28cf07a2cb"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.510148 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p94mc" event={"ID":"529a47ef-9cb3-4d79-9227-66910a7389e9","Type":"ContainerStarted","Data":"eda3c60fd092850f9b4f9387e36dfc0bd4ce5830f25ba159ef40f9df955560cb"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.513668 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerStarted","Data":"5afb5d187765cc15dd5344b228609b0cda34a5cf6b4ed7936a289a31c3df71e8"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.514968 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-csxd9" event={"ID":"6c3b8223-fede-4508-a234-c32e9cc406c5","Type":"ContainerStarted","Data":"6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.517458 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d8005a18-8df9-47d9-a446-9e2b18d04409","Type":"ContainerStarted","Data":"f55310c3982ffc9f1f1fd9f72061fc46493ee5cde0673985e2f7c7257669bffe"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.519024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerStarted","Data":"1b20f91817d565ef9a4e08076c1054d355e39f2c2f7568d7be8737af9a667b10"} Oct 06 07:34:22 crc kubenswrapper[4769]: I1006 07:34:22.558790 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-p94mc" podStartSLOduration=2.558764736 podStartE2EDuration="2.558764736s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:22.55160773 +0000 UTC m=+1059.075888877" watchObservedRunningTime="2025-10-06 07:34:22.558764736 +0000 UTC m=+1059.083045883" Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.060324 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.061096 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-central-agent" containerID="cri-o://ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3" gracePeriod=30 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.061245 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="proxy-httpd" containerID="cri-o://f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054" gracePeriod=30 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.061284 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="sg-core" containerID="cri-o://e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01" gracePeriod=30 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.061314 4769 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-notification-agent" containerID="cri-o://f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60" gracePeriod=30 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.530972 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798985649-7mtrk" event={"ID":"d17e6089-8fb3-4ff8-b603-7a266693936a","Type":"ContainerStarted","Data":"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c"} Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.531319 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.532476 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-csxd9" event={"ID":"6c3b8223-fede-4508-a234-c32e9cc406c5","Type":"ContainerStarted","Data":"b92a3e0e92d427e1c3033bf48992612173b7ddbb4ce094506db95c30afc9d34b"} Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535465 4769 generic.go:334] "Generic (PLEG): container finished" podID="fd9d0680-9974-4cce-b453-715607bba6ff" containerID="f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054" exitCode=0 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535501 4769 generic.go:334] "Generic (PLEG): container finished" podID="fd9d0680-9974-4cce-b453-715607bba6ff" containerID="e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01" exitCode=2 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535511 4769 generic.go:334] "Generic (PLEG): container finished" podID="fd9d0680-9974-4cce-b453-715607bba6ff" containerID="ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3" exitCode=0 Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535564 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerDied","Data":"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054"} Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535605 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerDied","Data":"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01"} Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.535616 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerDied","Data":"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3"} Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.548981 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5798985649-7mtrk" podStartSLOduration=3.548965282 podStartE2EDuration="3.548965282s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:23.548437138 +0000 UTC m=+1060.072718305" watchObservedRunningTime="2025-10-06 07:34:23.548965282 +0000 UTC m=+1060.073246429" Oct 06 07:34:23 crc kubenswrapper[4769]: I1006 07:34:23.570238 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-csxd9" podStartSLOduration=2.5702229340000002 podStartE2EDuration="2.570222934s" podCreationTimestamp="2025-10-06 07:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:23.566515253 +0000 UTC m=+1060.090796410" watchObservedRunningTime="2025-10-06 07:34:23.570222934 +0000 UTC m=+1060.094504071" Oct 06 07:34:24 crc kubenswrapper[4769]: I1006 07:34:24.803038 4769 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Oct 06 07:34:24 crc kubenswrapper[4769]: I1006 07:34:24.812705 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.557793 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d8005a18-8df9-47d9-a446-9e2b18d04409","Type":"ContainerStarted","Data":"710290ecbd75e270578c34053b9ee780c8089b0c3b90b2c3c75b0738655a3c97"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.558372 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.559643 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e99018fb-2572-49ab-80e7-e6797a99a442","Type":"ContainerStarted","Data":"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.576337 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="96f2521c-a4c7-4bec-80e3-6126d1a36579" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14" gracePeriod=30
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.576656 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96f2521c-a4c7-4bec-80e3-6126d1a36579","Type":"ContainerStarted","Data":"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.591708 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.654074242 podStartE2EDuration="4.591691569s" podCreationTimestamp="2025-10-06 07:34:21 +0000 UTC" firstStartedPulling="2025-10-06 07:34:22.437599862 +0000 UTC m=+1058.961881009" lastFinishedPulling="2025-10-06 07:34:23.375217189 +0000 UTC m=+1059.899498336" observedRunningTime="2025-10-06 07:34:25.583844884 +0000 UTC m=+1062.108126031" watchObservedRunningTime="2025-10-06 07:34:25.591691569 +0000 UTC m=+1062.115972716"
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.598291 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerStarted","Data":"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.598329 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerStarted","Data":"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.602494 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.615831197 podStartE2EDuration="5.602476443s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="2025-10-06 07:34:21.841101956 +0000 UTC m=+1058.365383103" lastFinishedPulling="2025-10-06 07:34:24.827747182 +0000 UTC m=+1061.352028349" observedRunningTime="2025-10-06 07:34:25.597777065 +0000 UTC m=+1062.122058212" watchObservedRunningTime="2025-10-06 07:34:25.602476443 +0000 UTC m=+1062.126757590"
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.605462 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerStarted","Data":"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.605494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerStarted","Data":"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"}
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.605620 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-log" containerID="cri-o://0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf" gracePeriod=30
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.605799 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-metadata" containerID="cri-o://0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4" gracePeriod=30
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.621274 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.529388752 podStartE2EDuration="5.621258807s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="2025-10-06 07:34:21.735138147 +0000 UTC m=+1058.259419294" lastFinishedPulling="2025-10-06 07:34:24.827008202 +0000 UTC m=+1061.351289349" observedRunningTime="2025-10-06 07:34:25.617537186 +0000 UTC m=+1062.141818333" watchObservedRunningTime="2025-10-06 07:34:25.621258807 +0000 UTC m=+1062.145539954"
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.642637 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.332234301 podStartE2EDuration="5.642619122s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="2025-10-06 07:34:21.516065015 +0000 UTC m=+1058.040346162" lastFinishedPulling="2025-10-06 07:34:24.826449836 +0000 UTC m=+1061.350730983" observedRunningTime="2025-10-06 07:34:25.637204864 +0000 UTC m=+1062.161486011" watchObservedRunningTime="2025-10-06 07:34:25.642619122 +0000 UTC m=+1062.166900269"
Oct 06 07:34:25 crc kubenswrapper[4769]: I1006 07:34:25.676126 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.880024544 podStartE2EDuration="5.676106408s" podCreationTimestamp="2025-10-06 07:34:20 +0000 UTC" firstStartedPulling="2025-10-06 07:34:22.030260229 +0000 UTC m=+1058.554541376" lastFinishedPulling="2025-10-06 07:34:24.826342073 +0000 UTC m=+1061.350623240" observedRunningTime="2025-10-06 07:34:25.664161961 +0000 UTC m=+1062.188443128" watchObservedRunningTime="2025-10-06 07:34:25.676106408 +0000 UTC m=+1062.200387555"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.075656 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.200000 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.209660 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.264385 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp9ch\" (UniqueName: \"kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch\") pod \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") "
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.264590 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data\") pod \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") "
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.264618 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle\") pod \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") "
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.264729 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs\") pod \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\" (UID: \"5dadcedc-6861-47a3-a7db-8eb843ff98e5\") "
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.265059 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs" (OuterVolumeSpecName: "logs") pod "5dadcedc-6861-47a3-a7db-8eb843ff98e5" (UID: "5dadcedc-6861-47a3-a7db-8eb843ff98e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.265537 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dadcedc-6861-47a3-a7db-8eb843ff98e5-logs\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.270016 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch" (OuterVolumeSpecName: "kube-api-access-sp9ch") pod "5dadcedc-6861-47a3-a7db-8eb843ff98e5" (UID: "5dadcedc-6861-47a3-a7db-8eb843ff98e5"). InnerVolumeSpecName "kube-api-access-sp9ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.311610 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data" (OuterVolumeSpecName: "config-data") pod "5dadcedc-6861-47a3-a7db-8eb843ff98e5" (UID: "5dadcedc-6861-47a3-a7db-8eb843ff98e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.332756 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5dadcedc-6861-47a3-a7db-8eb843ff98e5" (UID: "5dadcedc-6861-47a3-a7db-8eb843ff98e5"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.368849 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp9ch\" (UniqueName: \"kubernetes.io/projected/5dadcedc-6861-47a3-a7db-8eb843ff98e5-kube-api-access-sp9ch\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.368887 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.368900 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5dadcedc-6861-47a3-a7db-8eb843ff98e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624578 4769 generic.go:334] "Generic (PLEG): container finished" podID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerID="0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4" exitCode=0
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624624 4769 generic.go:334] "Generic (PLEG): container finished" podID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerID="0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf" exitCode=143
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624671 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624742 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerDied","Data":"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"}
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624882 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerDied","Data":"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"}
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624917 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5dadcedc-6861-47a3-a7db-8eb843ff98e5","Type":"ContainerDied","Data":"5afb5d187765cc15dd5344b228609b0cda34a5cf6b4ed7936a289a31c3df71e8"}
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.624949 4769 scope.go:117] "RemoveContainer" containerID="0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.666115 4769 scope.go:117] "RemoveContainer" containerID="0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.666231 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.688400 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.693225 4769 scope.go:117] "RemoveContainer" containerID="0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"
Oct 06 07:34:26 crc kubenswrapper[4769]: E1006 07:34:26.693638 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4\": container with ID starting with 0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4 not found: ID does not exist" containerID="0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.693667 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"} err="failed to get container status \"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4\": rpc error: code = NotFound desc = could not find container \"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4\": container with ID starting with 0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4 not found: ID does not exist"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.693686 4769 scope.go:117] "RemoveContainer" containerID="0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"
Oct 06 07:34:26 crc kubenswrapper[4769]: E1006 07:34:26.694063 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf\": container with ID starting with 0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf not found: ID does not exist" containerID="0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.694081 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"} err="failed to get container status \"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf\": rpc error: code = NotFound desc = could not find container \"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf\": container with ID starting with 0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf not found: ID does not exist"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.694095 4769 scope.go:117] "RemoveContainer" containerID="0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.694481 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4"} err="failed to get container status \"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4\": rpc error: code = NotFound desc = could not find container \"0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4\": container with ID starting with 0b28e7b8e21ea185fb53cb6aea4e30337ac53fc64be6e625792f10d79a57b0b4 not found: ID does not exist"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.694500 4769 scope.go:117] "RemoveContainer" containerID="0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.694911 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf"} err="failed to get container status \"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf\": rpc error: code = NotFound desc = could not find container \"0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf\": container with ID starting with 0b1674cb822ae4122c71346fcac75ced5bb304b46dd8d9afd853c327096c1cdf not found: ID does not exist"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.707443 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:26 crc kubenswrapper[4769]: E1006 07:34:26.708577 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-metadata"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.708597 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-metadata"
Oct 06 07:34:26 crc kubenswrapper[4769]: E1006 07:34:26.708615 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-log"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.708624 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-log"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.708907 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-metadata"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.708938 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" containerName="nova-metadata-log"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.710232 4769 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.718351 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.718660 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.723525 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.776174 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9cfb\" (UniqueName: \"kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.776214 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.776270 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.776309 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.776410 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.877714 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.877769 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.877851 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9cfb\" (UniqueName: \"kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.877872 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.877913 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.878317 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.883416 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.883539 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.883740 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:26 crc kubenswrapper[4769]: I1006 07:34:26.893190 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9cfb\" (UniqueName: \"kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb\") pod \"nova-metadata-0\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " pod="openstack/nova-metadata-0"
Oct 06 07:34:27 crc kubenswrapper[4769]: I1006 07:34:27.039617 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Oct 06 07:34:27 crc kubenswrapper[4769]: I1006 07:34:27.524540 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Oct 06 07:34:27 crc kubenswrapper[4769]: W1006 07:34:27.529532 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a6802a_063e_46f1_ad58_4605d677377c.slice/crio-7d994f960b5b60a213ac1a657c593b810cefe62a3f097f56e23dbd7a5faa329c WatchSource:0}: Error finding container 7d994f960b5b60a213ac1a657c593b810cefe62a3f097f56e23dbd7a5faa329c: Status 404 returned error can't find the container with id 7d994f960b5b60a213ac1a657c593b810cefe62a3f097f56e23dbd7a5faa329c
Oct 06 07:34:27 crc kubenswrapper[4769]: I1006 07:34:27.634232 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerStarted","Data":"7d994f960b5b60a213ac1a657c593b810cefe62a3f097f56e23dbd7a5faa329c"}
Oct 06 07:34:28 crc kubenswrapper[4769]: I1006 07:34:28.177055 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dadcedc-6861-47a3-a7db-8eb843ff98e5" path="/var/lib/kubelet/pods/5dadcedc-6861-47a3-a7db-8eb843ff98e5/volumes"
Oct 06 07:34:28 crc kubenswrapper[4769]: I1006 07:34:28.654362 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerStarted","Data":"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe"}
Oct 06 07:34:28 crc kubenswrapper[4769]: I1006 07:34:28.654413 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerStarted","Data":"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb"}
Oct 06 07:34:28 crc kubenswrapper[4769]: I1006 07:34:28.692516 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.692482757 podStartE2EDuration="2.692482757s" podCreationTimestamp="2025-10-06 07:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:28.68015137 +0000 UTC m=+1065.204432537" watchObservedRunningTime="2025-10-06 07:34:28.692482757 +0000 UTC m=+1065.216763914"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.055552 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.136364 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.136543 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.136643 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87m4j\" (UniqueName: \"kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006
07:34:29.136793 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.136887 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.136941 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.137188 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd\") pod \"fd9d0680-9974-4cce-b453-715607bba6ff\" (UID: \"fd9d0680-9974-4cce-b453-715607bba6ff\") "
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.137361 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.137701 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.137959 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.137982 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fd9d0680-9974-4cce-b453-715607bba6ff-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.141530 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts" (OuterVolumeSpecName: "scripts") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.141814 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j" (OuterVolumeSpecName: "kube-api-access-87m4j") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "kube-api-access-87m4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.167865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.218124 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.240532 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87m4j\" (UniqueName: \"kubernetes.io/projected/fd9d0680-9974-4cce-b453-715607bba6ff-kube-api-access-87m4j\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.240574 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-scripts\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.240584 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.240594 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.255620 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data" (OuterVolumeSpecName: "config-data") pod "fd9d0680-9974-4cce-b453-715607bba6ff" (UID: "fd9d0680-9974-4cce-b453-715607bba6ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.342290 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9d0680-9974-4cce-b453-715607bba6ff-config-data\") on node \"crc\" DevicePath \"\""
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.666867 4769 generic.go:334] "Generic (PLEG): container finished" podID="fd9d0680-9974-4cce-b453-715607bba6ff" containerID="f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60" exitCode=0
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.666948 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerDied","Data":"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60"}
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.666992 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.667060 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fd9d0680-9974-4cce-b453-715607bba6ff","Type":"ContainerDied","Data":"7189950a2d65268f6f281f5a5484bbec54771823c0afeebcb8485448bca9dae6"}
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.667103 4769 scope.go:117] "RemoveContainer" containerID="f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.699994 4769 scope.go:117] "RemoveContainer" containerID="e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.708109 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.724766 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.746525 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.746930 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-central-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.746946 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-central-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.746971 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-notification-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.746977 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-notification-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.746993 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="sg-core"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.746999 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="sg-core"
Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.747009 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="proxy-httpd"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.747014 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="proxy-httpd"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.747187 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="sg-core"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.747204 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-central-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.747220 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="ceilometer-notification-agent"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.747230 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" containerName="proxy-httpd"
Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.748849 4769 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.752757 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.752808 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.755495 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.760831 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.767868 4769 scope.go:117] "RemoveContainer" containerID="f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.796784 4769 scope.go:117] "RemoveContainer" containerID="ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.814255 4769 scope.go:117] "RemoveContainer" containerID="f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054" Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.814589 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054\": container with ID starting with f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054 not found: ID does not exist" containerID="f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.814616 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054"} err="failed to get container status 
\"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054\": rpc error: code = NotFound desc = could not find container \"f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054\": container with ID starting with f86a8119cad3d52629569949109d7843d0b1decb50e324bfd15b7dd41b649054 not found: ID does not exist" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.814636 4769 scope.go:117] "RemoveContainer" containerID="e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01" Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.814951 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01\": container with ID starting with e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01 not found: ID does not exist" containerID="e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.814974 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01"} err="failed to get container status \"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01\": rpc error: code = NotFound desc = could not find container \"e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01\": container with ID starting with e2644070fa52b725e62cd79ac31eaedae9395d922eb569cfb730a7b277560b01 not found: ID does not exist" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.814986 4769 scope.go:117] "RemoveContainer" containerID="f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60" Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.815278 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60\": container with ID starting with f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60 not found: ID does not exist" containerID="f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.815299 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60"} err="failed to get container status \"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60\": rpc error: code = NotFound desc = could not find container \"f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60\": container with ID starting with f02bbdbde1ccdeb3ab45f58aeb2ded0e25a47affd169fba0ea3dde197135db60 not found: ID does not exist" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.815312 4769 scope.go:117] "RemoveContainer" containerID="ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3" Oct 06 07:34:29 crc kubenswrapper[4769]: E1006 07:34:29.815594 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3\": container with ID starting with ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3 not found: ID does not exist" containerID="ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.815618 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3"} err="failed to get container status \"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3\": rpc error: code = NotFound desc = could not find container \"ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3\": container with ID 
starting with ce355f8cab91e224c7f0a357a7ab51ab26a9653ce9e5e065e5c5e44f54c638e3 not found: ID does not exist" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852060 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852099 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852149 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852321 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 
07:34:29.852464 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852494 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfhs\" (UniqueName: \"kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.852607 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954455 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954574 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954601 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfhs\" (UniqueName: \"kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954645 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954733 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954770 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.954847 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc 
kubenswrapper[4769]: I1006 07:34:29.955036 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.955876 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.959972 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.960649 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.961674 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.962889 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.964121 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:29 crc kubenswrapper[4769]: I1006 07:34:29.976564 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfhs\" (UniqueName: \"kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs\") pod \"ceilometer-0\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " pod="openstack/ceilometer-0" Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.073807 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.199634 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9d0680-9974-4cce-b453-715607bba6ff" path="/var/lib/kubelet/pods/fd9d0680-9974-4cce-b453-715607bba6ff/volumes" Oct 06 07:34:30 crc kubenswrapper[4769]: E1006 07:34:30.305783 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.543097 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:34:30 crc kubenswrapper[4769]: W1006 07:34:30.557743 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76a29e45_4076_42dd_801b_2fc41d73ea04.slice/crio-98420e17e8f50b07a0738d26a681284447c8bc1e95eb3b7f7fc11f1d7daaaff9 WatchSource:0}: Error finding container 98420e17e8f50b07a0738d26a681284447c8bc1e95eb3b7f7fc11f1d7daaaff9: Status 404 returned error can't find the container with id 98420e17e8f50b07a0738d26a681284447c8bc1e95eb3b7f7fc11f1d7daaaff9 Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.677624 4769 generic.go:334] "Generic (PLEG): container finished" podID="529a47ef-9cb3-4d79-9227-66910a7389e9" containerID="eda3c60fd092850f9b4f9387e36dfc0bd4ce5830f25ba159ef40f9df955560cb" exitCode=0 Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.677671 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p94mc" event={"ID":"529a47ef-9cb3-4d79-9227-66910a7389e9","Type":"ContainerDied","Data":"eda3c60fd092850f9b4f9387e36dfc0bd4ce5830f25ba159ef40f9df955560cb"} Oct 06 07:34:30 crc kubenswrapper[4769]: I1006 07:34:30.678657 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerStarted","Data":"98420e17e8f50b07a0738d26a681284447c8bc1e95eb3b7f7fc11f1d7daaaff9"} Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.036413 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.036765 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.074766 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.104063 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 07:34:31 crc kubenswrapper[4769]: 
I1006 07:34:31.251535 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.316147 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.688339 4769 generic.go:334] "Generic (PLEG): container finished" podID="6c3b8223-fede-4508-a234-c32e9cc406c5" containerID="b92a3e0e92d427e1c3033bf48992612173b7ddbb4ce094506db95c30afc9d34b" exitCode=0 Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.688393 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-csxd9" event={"ID":"6c3b8223-fede-4508-a234-c32e9cc406c5","Type":"ContainerDied","Data":"b92a3e0e92d427e1c3033bf48992612173b7ddbb4ce094506db95c30afc9d34b"} Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.690613 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerStarted","Data":"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda"} Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.690667 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerStarted","Data":"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f"} Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.690731 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="dnsmasq-dns" containerID="cri-o://b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a" gracePeriod=10 Oct 06 07:34:31 crc kubenswrapper[4769]: I1006 07:34:31.724375 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.013146 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.042201 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.043031 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.104083 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.125553 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.125614 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.204013 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts\") pod \"529a47ef-9cb3-4d79-9227-66910a7389e9\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.204449 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk7b8\" (UniqueName: 
\"kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8\") pod \"529a47ef-9cb3-4d79-9227-66910a7389e9\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.204528 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle\") pod \"529a47ef-9cb3-4d79-9227-66910a7389e9\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.204547 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data\") pod \"529a47ef-9cb3-4d79-9227-66910a7389e9\" (UID: \"529a47ef-9cb3-4d79-9227-66910a7389e9\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.245771 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts" (OuterVolumeSpecName: "scripts") pod "529a47ef-9cb3-4d79-9227-66910a7389e9" (UID: "529a47ef-9cb3-4d79-9227-66910a7389e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.248121 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8" (OuterVolumeSpecName: "kube-api-access-hk7b8") pod "529a47ef-9cb3-4d79-9227-66910a7389e9" (UID: "529a47ef-9cb3-4d79-9227-66910a7389e9"). InnerVolumeSpecName "kube-api-access-hk7b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.248195 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "529a47ef-9cb3-4d79-9227-66910a7389e9" (UID: "529a47ef-9cb3-4d79-9227-66910a7389e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.251915 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data" (OuterVolumeSpecName: "config-data") pod "529a47ef-9cb3-4d79-9227-66910a7389e9" (UID: "529a47ef-9cb3-4d79-9227-66910a7389e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.305682 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk7b8\" (UniqueName: \"kubernetes.io/projected/529a47ef-9cb3-4d79-9227-66910a7389e9-kube-api-access-hk7b8\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.305713 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.305723 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.305733 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/529a47ef-9cb3-4d79-9227-66910a7389e9-scripts\") on node \"crc\" DevicePath \"\"" Oct 
06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.328449 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.406446 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.406501 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b446q\" (UniqueName: \"kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.406573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.406725 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.406769 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 
07:34:32.406804 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb\") pod \"5f55324f-ebdf-459f-aa92-abfeec6a6755\" (UID: \"5f55324f-ebdf-459f-aa92-abfeec6a6755\") " Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.410667 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q" (OuterVolumeSpecName: "kube-api-access-b446q") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "kube-api-access-b446q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.461358 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.468321 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.474397 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.483247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config" (OuterVolumeSpecName: "config") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.485360 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5f55324f-ebdf-459f-aa92-abfeec6a6755" (UID: "5f55324f-ebdf-459f-aa92-abfeec6a6755"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510651 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510696 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b446q\" (UniqueName: \"kubernetes.io/projected/5f55324f-ebdf-459f-aa92-abfeec6a6755-kube-api-access-b446q\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510706 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510715 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510723 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.510731 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5f55324f-ebdf-459f-aa92-abfeec6a6755-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.698688 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p94mc" event={"ID":"529a47ef-9cb3-4d79-9227-66910a7389e9","Type":"ContainerDied","Data":"2714aeab93b7eea63e879d94422412b53baab53d39f52d668dba779c7a762a8e"} Oct 06 07:34:32 crc 
kubenswrapper[4769]: I1006 07:34:32.698723 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2714aeab93b7eea63e879d94422412b53baab53d39f52d668dba779c7a762a8e" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.698772 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p94mc" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.704767 4769 generic.go:334] "Generic (PLEG): container finished" podID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerID="b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a" exitCode=0 Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.704829 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" event={"ID":"5f55324f-ebdf-459f-aa92-abfeec6a6755","Type":"ContainerDied","Data":"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a"} Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.704855 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" event={"ID":"5f55324f-ebdf-459f-aa92-abfeec6a6755","Type":"ContainerDied","Data":"d5cc7554548757e16a722243afb4b1528d93fed2f13a1f8c38bfac4b1d157dc2"} Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.704869 4769 scope.go:117] "RemoveContainer" containerID="b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.704983 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586c7c99fc-8l5l6" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.715526 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerStarted","Data":"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3"} Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.758681 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.761516 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586c7c99fc-8l5l6"] Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.791117 4769 scope.go:117] "RemoveContainer" containerID="c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.831092 4769 scope.go:117] "RemoveContainer" containerID="b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a" Oct 06 07:34:32 crc kubenswrapper[4769]: E1006 07:34:32.838022 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a\": container with ID starting with b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a not found: ID does not exist" containerID="b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.838071 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a"} err="failed to get container status \"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a\": rpc error: code = NotFound desc = could not find container \"b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a\": container 
with ID starting with b6b3898f03956f745340c0a7ac682d3669b0c110e31a106bd4f8fd055d77529a not found: ID does not exist" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.838102 4769 scope.go:117] "RemoveContainer" containerID="c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda" Oct 06 07:34:32 crc kubenswrapper[4769]: E1006 07:34:32.838760 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda\": container with ID starting with c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda not found: ID does not exist" containerID="c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.838804 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda"} err="failed to get container status \"c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda\": rpc error: code = NotFound desc = could not find container \"c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda\": container with ID starting with c656aaa7a1c0e1e55ddb3f763cc21f89cfbde3b1f458f5dfa18fcfc558db7cda not found: ID does not exist" Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.880926 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.881125 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-log" containerID="cri-o://74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf" gracePeriod=30 Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.881609 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-api" containerID="cri-o://8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05" gracePeriod=30 Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.911562 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:32 crc kubenswrapper[4769]: I1006 07:34:32.937240 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.139377 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.220242 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts\") pod \"6c3b8223-fede-4508-a234-c32e9cc406c5\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.220342 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2h2m\" (UniqueName: \"kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m\") pod \"6c3b8223-fede-4508-a234-c32e9cc406c5\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.220459 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle\") pod \"6c3b8223-fede-4508-a234-c32e9cc406c5\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.220520 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data\") pod \"6c3b8223-fede-4508-a234-c32e9cc406c5\" (UID: \"6c3b8223-fede-4508-a234-c32e9cc406c5\") " Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.227698 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m" (OuterVolumeSpecName: "kube-api-access-m2h2m") pod "6c3b8223-fede-4508-a234-c32e9cc406c5" (UID: "6c3b8223-fede-4508-a234-c32e9cc406c5"). InnerVolumeSpecName "kube-api-access-m2h2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.248772 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts" (OuterVolumeSpecName: "scripts") pod "6c3b8223-fede-4508-a234-c32e9cc406c5" (UID: "6c3b8223-fede-4508-a234-c32e9cc406c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.268758 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data" (OuterVolumeSpecName: "config-data") pod "6c3b8223-fede-4508-a234-c32e9cc406c5" (UID: "6c3b8223-fede-4508-a234-c32e9cc406c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.278936 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c3b8223-fede-4508-a234-c32e9cc406c5" (UID: "6c3b8223-fede-4508-a234-c32e9cc406c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.324301 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.324338 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.324349 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c3b8223-fede-4508-a234-c32e9cc406c5-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.324358 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2h2m\" (UniqueName: \"kubernetes.io/projected/6c3b8223-fede-4508-a234-c32e9cc406c5-kube-api-access-m2h2m\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.728547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-csxd9" event={"ID":"6c3b8223-fede-4508-a234-c32e9cc406c5","Type":"ContainerDied","Data":"6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110"} Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.728865 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6397f6a9be807fac49a90f0c3e544adae136d2c0b13df54e3b4d3bb7acc2b110" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.728592 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-csxd9" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.732144 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerStarted","Data":"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3"} Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.732213 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.734516 4769 generic.go:334] "Generic (PLEG): container finished" podID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerID="74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf" exitCode=143 Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.734687 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerDied","Data":"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf"} Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.734726 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-log" containerID="cri-o://811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" gracePeriod=30 Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.734819 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-metadata" containerID="cri-o://06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" gracePeriod=30 Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.734843 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="e99018fb-2572-49ab-80e7-e6797a99a442" containerName="nova-scheduler-scheduler" containerID="cri-o://f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" gracePeriod=30 Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.776988 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.019613744 podStartE2EDuration="4.776969708s" podCreationTimestamp="2025-10-06 07:34:29 +0000 UTC" firstStartedPulling="2025-10-06 07:34:30.573860571 +0000 UTC m=+1067.098141718" lastFinishedPulling="2025-10-06 07:34:33.331216535 +0000 UTC m=+1069.855497682" observedRunningTime="2025-10-06 07:34:33.767812447 +0000 UTC m=+1070.292093594" watchObservedRunningTime="2025-10-06 07:34:33.776969708 +0000 UTC m=+1070.301250855" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.811918 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 07:34:33 crc kubenswrapper[4769]: E1006 07:34:33.812434 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="init" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812454 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="init" Oct 06 07:34:33 crc kubenswrapper[4769]: E1006 07:34:33.812499 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="dnsmasq-dns" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812508 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="dnsmasq-dns" Oct 06 07:34:33 crc kubenswrapper[4769]: E1006 07:34:33.812526 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529a47ef-9cb3-4d79-9227-66910a7389e9" containerName="nova-manage" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812534 4769 
state_mem.go:107] "Deleted CPUSet assignment" podUID="529a47ef-9cb3-4d79-9227-66910a7389e9" containerName="nova-manage" Oct 06 07:34:33 crc kubenswrapper[4769]: E1006 07:34:33.812579 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3b8223-fede-4508-a234-c32e9cc406c5" containerName="nova-cell1-conductor-db-sync" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812588 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b8223-fede-4508-a234-c32e9cc406c5" containerName="nova-cell1-conductor-db-sync" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812805 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="529a47ef-9cb3-4d79-9227-66910a7389e9" containerName="nova-manage" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812835 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3b8223-fede-4508-a234-c32e9cc406c5" containerName="nova-cell1-conductor-db-sync" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.812848 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" containerName="dnsmasq-dns" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.813682 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.815501 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.819356 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.838627 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfnpm\" (UniqueName: \"kubernetes.io/projected/80670f73-8131-40f1-8f91-a9291f87f615-kube-api-access-xfnpm\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.838854 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.838976 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.946138 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfnpm\" (UniqueName: \"kubernetes.io/projected/80670f73-8131-40f1-8f91-a9291f87f615-kube-api-access-xfnpm\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 
07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.946218 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.946268 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.950977 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.954509 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80670f73-8131-40f1-8f91-a9291f87f615-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:33 crc kubenswrapper[4769]: I1006 07:34:33.963595 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfnpm\" (UniqueName: \"kubernetes.io/projected/80670f73-8131-40f1-8f91-a9291f87f615-kube-api-access-xfnpm\") pod \"nova-cell1-conductor-0\" (UID: \"80670f73-8131-40f1-8f91-a9291f87f615\") " pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.128022 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.214224 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f55324f-ebdf-459f-aa92-abfeec6a6755" path="/var/lib/kubelet/pods/5f55324f-ebdf-459f-aa92-abfeec6a6755/volumes" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.325957 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.352695 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle\") pod \"15a6802a-063e-46f1-ad58-4605d677377c\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.352818 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9cfb\" (UniqueName: \"kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb\") pod \"15a6802a-063e-46f1-ad58-4605d677377c\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.352919 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data\") pod \"15a6802a-063e-46f1-ad58-4605d677377c\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.352965 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs\") pod \"15a6802a-063e-46f1-ad58-4605d677377c\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.353001 4769 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs\") pod \"15a6802a-063e-46f1-ad58-4605d677377c\" (UID: \"15a6802a-063e-46f1-ad58-4605d677377c\") " Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.353965 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs" (OuterVolumeSpecName: "logs") pod "15a6802a-063e-46f1-ad58-4605d677377c" (UID: "15a6802a-063e-46f1-ad58-4605d677377c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.358475 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb" (OuterVolumeSpecName: "kube-api-access-g9cfb") pod "15a6802a-063e-46f1-ad58-4605d677377c" (UID: "15a6802a-063e-46f1-ad58-4605d677377c"). InnerVolumeSpecName "kube-api-access-g9cfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.406865 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data" (OuterVolumeSpecName: "config-data") pod "15a6802a-063e-46f1-ad58-4605d677377c" (UID: "15a6802a-063e-46f1-ad58-4605d677377c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.408274 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "15a6802a-063e-46f1-ad58-4605d677377c" (UID: "15a6802a-063e-46f1-ad58-4605d677377c"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.408518 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15a6802a-063e-46f1-ad58-4605d677377c" (UID: "15a6802a-063e-46f1-ad58-4605d677377c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.455581 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.455623 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9cfb\" (UniqueName: \"kubernetes.io/projected/15a6802a-063e-46f1-ad58-4605d677377c-kube-api-access-g9cfb\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.455635 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.455643 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a6802a-063e-46f1-ad58-4605d677377c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.455652 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a6802a-063e-46f1-ad58-4605d677377c-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.676244 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-conductor-0"] Oct 06 07:34:34 crc kubenswrapper[4769]: W1006 07:34:34.681216 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80670f73_8131_40f1_8f91_a9291f87f615.slice/crio-cdf8874f3ebfa58e9af010c9924959bca1b1a933a39d501ae518ddb4faa325c2 WatchSource:0}: Error finding container cdf8874f3ebfa58e9af010c9924959bca1b1a933a39d501ae518ddb4faa325c2: Status 404 returned error can't find the container with id cdf8874f3ebfa58e9af010c9924959bca1b1a933a39d501ae518ddb4faa325c2 Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747784 4769 generic.go:334] "Generic (PLEG): container finished" podID="15a6802a-063e-46f1-ad58-4605d677377c" containerID="06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" exitCode=0 Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747825 4769 generic.go:334] "Generic (PLEG): container finished" podID="15a6802a-063e-46f1-ad58-4605d677377c" containerID="811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" exitCode=143 Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747869 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerDied","Data":"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe"} Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747895 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerDied","Data":"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb"} Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747906 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"15a6802a-063e-46f1-ad58-4605d677377c","Type":"ContainerDied","Data":"7d994f960b5b60a213ac1a657c593b810cefe62a3f097f56e23dbd7a5faa329c"} Oct 
06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747906 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.747920 4769 scope.go:117] "RemoveContainer" containerID="06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.753238 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80670f73-8131-40f1-8f91-a9291f87f615","Type":"ContainerStarted","Data":"cdf8874f3ebfa58e9af010c9924959bca1b1a933a39d501ae518ddb4faa325c2"} Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.772791 4769 scope.go:117] "RemoveContainer" containerID="811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.797685 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.799846 4769 scope.go:117] "RemoveContainer" containerID="06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" Oct 06 07:34:34 crc kubenswrapper[4769]: E1006 07:34:34.800221 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe\": container with ID starting with 06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe not found: ID does not exist" containerID="06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800269 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe"} err="failed to get container status \"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe\": rpc error: code = NotFound 
desc = could not find container \"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe\": container with ID starting with 06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe not found: ID does not exist" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800298 4769 scope.go:117] "RemoveContainer" containerID="811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" Oct 06 07:34:34 crc kubenswrapper[4769]: E1006 07:34:34.800512 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb\": container with ID starting with 811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb not found: ID does not exist" containerID="811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800536 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb"} err="failed to get container status \"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb\": rpc error: code = NotFound desc = could not find container \"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb\": container with ID starting with 811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb not found: ID does not exist" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800551 4769 scope.go:117] "RemoveContainer" containerID="06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800717 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe"} err="failed to get container status \"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe\": rpc error: code = 
NotFound desc = could not find container \"06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe\": container with ID starting with 06057ae22854fe36ee3c3777a03481dee297a43513a7cb246399fe74541867fe not found: ID does not exist" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800738 4769 scope.go:117] "RemoveContainer" containerID="811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.800919 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb"} err="failed to get container status \"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb\": rpc error: code = NotFound desc = could not find container \"811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb\": container with ID starting with 811f96dea4b8b7e5efc2c550826a382a241c022c53473a7d4557f005c7cb64bb not found: ID does not exist" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.806926 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.830109 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:34 crc kubenswrapper[4769]: E1006 07:34:34.830642 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-log" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.830660 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-log" Oct 06 07:34:34 crc kubenswrapper[4769]: E1006 07:34:34.830700 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-metadata" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.830708 4769 
state_mem.go:107] "Deleted CPUSet assignment" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-metadata" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.830929 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-metadata" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.830953 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a6802a-063e-46f1-ad58-4605d677377c" containerName="nova-metadata-log" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.832532 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.839816 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.840157 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.860405 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.860542 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbgcl\" (UniqueName: \"kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.860582 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.860606 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.860632 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.862983 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.963259 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbgcl\" (UniqueName: \"kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.963310 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.963337 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.963362 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.963406 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.964648 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.967639 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.968141 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " 
pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.968668 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:34 crc kubenswrapper[4769]: I1006 07:34:34.980935 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbgcl\" (UniqueName: \"kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl\") pod \"nova-metadata-0\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " pod="openstack/nova-metadata-0" Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.169708 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.741332 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.783766 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"80670f73-8131-40f1-8f91-a9291f87f615","Type":"ContainerStarted","Data":"935a9e687f07958978238ac8b3b032585a2ef29708dd21777c67d37c853ffe87"} Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.785014 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.786208 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerStarted","Data":"76c15f0f7ae0e3d5ea3518ed6585ad41357c3d8307ab26d92dd32a49d756abdb"} Oct 06 07:34:35 crc kubenswrapper[4769]: I1006 07:34:35.829227 4769 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.829210205 podStartE2EDuration="2.829210205s" podCreationTimestamp="2025-10-06 07:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:35.821915725 +0000 UTC m=+1072.346196862" watchObservedRunningTime="2025-10-06 07:34:35.829210205 +0000 UTC m=+1072.353491352" Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.080684 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.089892 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.093733 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.093790 4769 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e99018fb-2572-49ab-80e7-e6797a99a442" containerName="nova-scheduler-scheduler" Oct 
06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.162858 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.199886 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a6802a-063e-46f1-ad58-4605d677377c" path="/var/lib/kubelet/pods/15a6802a-063e-46f1-ad58-4605d677377c/volumes" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.292381 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs\") pod \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.292747 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle\") pod \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.292821 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data\") pod \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.292927 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6lbz\" (UniqueName: \"kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz\") pod \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\" (UID: \"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6\") " Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.292970 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs" (OuterVolumeSpecName: "logs") pod "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" (UID: "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.293485 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.310490 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz" (OuterVolumeSpecName: "kube-api-access-s6lbz") pod "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" (UID: "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6"). InnerVolumeSpecName "kube-api-access-s6lbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.323566 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" (UID: "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.324701 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data" (OuterVolumeSpecName: "config-data") pod "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" (UID: "52d116a3-4e9b-45c2-8a8e-e58bef7fccf6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.395516 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6lbz\" (UniqueName: \"kubernetes.io/projected/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-kube-api-access-s6lbz\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.395552 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.395563 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.817574 4769 generic.go:334] "Generic (PLEG): container finished" podID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerID="8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05" exitCode=0 Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.817622 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerDied","Data":"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05"} Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.817656 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.817678 4769 scope.go:117] "RemoveContainer" containerID="8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.817665 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"52d116a3-4e9b-45c2-8a8e-e58bef7fccf6","Type":"ContainerDied","Data":"1b20f91817d565ef9a4e08076c1054d355e39f2c2f7568d7be8737af9a667b10"} Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.826832 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerStarted","Data":"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273"} Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.826881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerStarted","Data":"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e"} Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.853323 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.853305669 podStartE2EDuration="2.853305669s" podCreationTimestamp="2025-10-06 07:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:36.848290321 +0000 UTC m=+1073.372571468" watchObservedRunningTime="2025-10-06 07:34:36.853305669 +0000 UTC m=+1073.377586816" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.865838 4769 scope.go:117] "RemoveContainer" containerID="74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.867989 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.896700 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.908670 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.909072 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-api" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.909088 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-api" Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.909119 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-log" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.909125 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-log" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.909302 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-api" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.909326 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" containerName="nova-api-log" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.910301 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.912922 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.924657 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.954769 4769 scope.go:117] "RemoveContainer" containerID="8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05" Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.955384 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05\": container with ID starting with 8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05 not found: ID does not exist" containerID="8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.955412 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05"} err="failed to get container status \"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05\": rpc error: code = NotFound desc = could not find container \"8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05\": container with ID starting with 8b69fac67ad531b5b3c0dc8b025af2e3237cfcadd6a71e504f420ee61af1cc05 not found: ID does not exist" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.955685 4769 scope.go:117] "RemoveContainer" containerID="74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf" Oct 06 07:34:36 crc kubenswrapper[4769]: E1006 07:34:36.956289 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf\": container with ID starting with 74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf not found: ID does not exist" containerID="74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf" Oct 06 07:34:36 crc kubenswrapper[4769]: I1006 07:34:36.956308 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf"} err="failed to get container status \"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf\": rpc error: code = NotFound desc = could not find container \"74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf\": container with ID starting with 74fd6e3c46cbba77ec59348ae67505ab990ee237b7346eb2cb7a3896cad5eecf not found: ID does not exist" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.005101 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.005146 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndm9\" (UniqueName: \"kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.005250 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 
crc kubenswrapper[4769]: I1006 07:34:37.005319 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.107187 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.107275 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.107322 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.107350 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tndm9\" (UniqueName: \"kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.107621 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs\") pod \"nova-api-0\" 
(UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.120342 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.120991 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.125596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tndm9\" (UniqueName: \"kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9\") pod \"nova-api-0\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.237114 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.307189 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.413277 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle\") pod \"e99018fb-2572-49ab-80e7-e6797a99a442\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.413316 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data\") pod \"e99018fb-2572-49ab-80e7-e6797a99a442\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.413395 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7wjc\" (UniqueName: \"kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc\") pod \"e99018fb-2572-49ab-80e7-e6797a99a442\" (UID: \"e99018fb-2572-49ab-80e7-e6797a99a442\") " Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.418558 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc" (OuterVolumeSpecName: "kube-api-access-h7wjc") pod "e99018fb-2572-49ab-80e7-e6797a99a442" (UID: "e99018fb-2572-49ab-80e7-e6797a99a442"). InnerVolumeSpecName "kube-api-access-h7wjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.441256 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e99018fb-2572-49ab-80e7-e6797a99a442" (UID: "e99018fb-2572-49ab-80e7-e6797a99a442"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.450096 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data" (OuterVolumeSpecName: "config-data") pod "e99018fb-2572-49ab-80e7-e6797a99a442" (UID: "e99018fb-2572-49ab-80e7-e6797a99a442"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.515628 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.515668 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e99018fb-2572-49ab-80e7-e6797a99a442-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.515683 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7wjc\" (UniqueName: \"kubernetes.io/projected/e99018fb-2572-49ab-80e7-e6797a99a442-kube-api-access-h7wjc\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.723162 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:34:37 crc kubenswrapper[4769]: W1006 07:34:37.729532 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d3c8d22_6c66_4cc6_b1a9_7528e8fbbe44.slice/crio-c0cbff6a411ec6a56cfce1ac9573e974ccc8f56c8a4e54488ffd23e8e63e07a7 WatchSource:0}: Error finding container c0cbff6a411ec6a56cfce1ac9573e974ccc8f56c8a4e54488ffd23e8e63e07a7: Status 404 returned error can't find the container with id c0cbff6a411ec6a56cfce1ac9573e974ccc8f56c8a4e54488ffd23e8e63e07a7 Oct 06 07:34:37 
crc kubenswrapper[4769]: I1006 07:34:37.848463 4769 generic.go:334] "Generic (PLEG): container finished" podID="e99018fb-2572-49ab-80e7-e6797a99a442" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" exitCode=0 Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.848592 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.849446 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e99018fb-2572-49ab-80e7-e6797a99a442","Type":"ContainerDied","Data":"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af"} Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.849496 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e99018fb-2572-49ab-80e7-e6797a99a442","Type":"ContainerDied","Data":"4ac2832ccfe6f4cbeccc1a05978dee0707796ee3c7de0f3fa4a681d3997ad936"} Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.849514 4769 scope.go:117] "RemoveContainer" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.867750 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerStarted","Data":"c0cbff6a411ec6a56cfce1ac9573e974ccc8f56c8a4e54488ffd23e8e63e07a7"} Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.890205 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.896550 4769 scope.go:117] "RemoveContainer" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" Oct 06 07:34:37 crc kubenswrapper[4769]: E1006 07:34:37.897051 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af\": container with ID starting with f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af not found: ID does not exist" containerID="f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.897105 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af"} err="failed to get container status \"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af\": rpc error: code = NotFound desc = could not find container \"f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af\": container with ID starting with f6fa9a18d1784cf6197ec730507ec9e5522fd6aa74e3236e6ec71b55d49c93af not found: ID does not exist" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.903546 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.922032 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:37 crc kubenswrapper[4769]: E1006 07:34:37.922573 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e99018fb-2572-49ab-80e7-e6797a99a442" containerName="nova-scheduler-scheduler" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.922608 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e99018fb-2572-49ab-80e7-e6797a99a442" containerName="nova-scheduler-scheduler" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.922868 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e99018fb-2572-49ab-80e7-e6797a99a442" containerName="nova-scheduler-scheduler" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.923679 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.926933 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 07:34:37 crc kubenswrapper[4769]: I1006 07:34:37.935943 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.023967 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tjsd\" (UniqueName: \"kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.024040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.024075 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.125515 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tjsd\" (UniqueName: \"kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.125933 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.127761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.133063 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.134324 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.148072 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tjsd\" (UniqueName: \"kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd\") pod \"nova-scheduler-0\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.178333 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52d116a3-4e9b-45c2-8a8e-e58bef7fccf6" path="/var/lib/kubelet/pods/52d116a3-4e9b-45c2-8a8e-e58bef7fccf6/volumes" Oct 06 
07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.179102 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e99018fb-2572-49ab-80e7-e6797a99a442" path="/var/lib/kubelet/pods/e99018fb-2572-49ab-80e7-e6797a99a442/volumes" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.253592 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.694499 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.880658 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d4d351d-b1dc-4010-988d-08528161f83e","Type":"ContainerStarted","Data":"58ba34bb01e162de55ecd02c9d264726d4b9b77916c79e02dec7a0b9b0fd18e1"} Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.889013 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerStarted","Data":"413713bf427b64dc1c0bd63dc4d46f1b41e581e35f87299335685c5080a6ef1b"} Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.889044 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerStarted","Data":"df03044b8dd36567c91f27ced0e69e67e6f1dd24e6def12fed3e5525bb8a826b"} Oct 06 07:34:38 crc kubenswrapper[4769]: I1006 07:34:38.912990 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9129728889999997 podStartE2EDuration="2.912972889s" podCreationTimestamp="2025-10-06 07:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:38.905291698 +0000 UTC m=+1075.429572905" watchObservedRunningTime="2025-10-06 07:34:38.912972889 +0000 
UTC m=+1075.437254036" Oct 06 07:34:39 crc kubenswrapper[4769]: I1006 07:34:39.176647 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 06 07:34:39 crc kubenswrapper[4769]: I1006 07:34:39.897734 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d4d351d-b1dc-4010-988d-08528161f83e","Type":"ContainerStarted","Data":"0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b"} Oct 06 07:34:39 crc kubenswrapper[4769]: I1006 07:34:39.914622 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.914602496 podStartE2EDuration="2.914602496s" podCreationTimestamp="2025-10-06 07:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:39.913144087 +0000 UTC m=+1076.437425244" watchObservedRunningTime="2025-10-06 07:34:39.914602496 +0000 UTC m=+1076.438883643" Oct 06 07:34:40 crc kubenswrapper[4769]: I1006 07:34:40.177207 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:34:40 crc kubenswrapper[4769]: I1006 07:34:40.177245 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:34:40 crc kubenswrapper[4769]: E1006 07:34:40.522772 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c81ea12_eef8_4593_9181_078ce593c881.slice/crio-conmon-2f9b712ea83532a44315216eb69cb48f479b8c58f920d04357e5d791746cbf6d.scope\": RecentStats: unable to find data in memory cache]" Oct 06 07:34:43 crc kubenswrapper[4769]: I1006 07:34:43.254014 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 07:34:45 crc 
kubenswrapper[4769]: I1006 07:34:45.170200 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 07:34:45 crc kubenswrapper[4769]: I1006 07:34:45.170283 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 07:34:46 crc kubenswrapper[4769]: I1006 07:34:46.191826 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:46 crc kubenswrapper[4769]: I1006 07:34:46.193054 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:47 crc kubenswrapper[4769]: I1006 07:34:47.237723 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:34:47 crc kubenswrapper[4769]: I1006 07:34:47.238280 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:34:48 crc kubenswrapper[4769]: I1006 07:34:48.254085 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 07:34:48 crc kubenswrapper[4769]: I1006 07:34:48.288250 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 07:34:48 crc kubenswrapper[4769]: I1006 07:34:48.319609 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-api" probeResult="failure" 
output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:48 crc kubenswrapper[4769]: I1006 07:34:48.319614 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 07:34:49 crc kubenswrapper[4769]: I1006 07:34:49.029826 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 07:34:52 crc kubenswrapper[4769]: I1006 07:34:52.246485 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:34:52 crc kubenswrapper[4769]: I1006 07:34:52.246594 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:34:55 crc kubenswrapper[4769]: I1006 07:34:55.178359 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 07:34:55 crc kubenswrapper[4769]: I1006 07:34:55.179745 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 07:34:55 crc kubenswrapper[4769]: I1006 07:34:55.186170 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 07:34:55 crc kubenswrapper[4769]: I1006 07:34:55.187003 4769 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.075376 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.084539 4769 generic.go:334] "Generic (PLEG): container finished" podID="96f2521c-a4c7-4bec-80e3-6126d1a36579" containerID="b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14" exitCode=137 Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.084625 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96f2521c-a4c7-4bec-80e3-6126d1a36579","Type":"ContainerDied","Data":"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14"} Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.084671 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"96f2521c-a4c7-4bec-80e3-6126d1a36579","Type":"ContainerDied","Data":"3e000db09731ec2fe2bc7bf8a9a14a481ac581bc84b4d52732d587b5d7fd5184"} Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.084689 4769 scope.go:117] "RemoveContainer" containerID="b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.084637 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.102691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kqg\" (UniqueName: \"kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg\") pod \"96f2521c-a4c7-4bec-80e3-6126d1a36579\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.102884 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data\") pod \"96f2521c-a4c7-4bec-80e3-6126d1a36579\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.103510 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle\") pod \"96f2521c-a4c7-4bec-80e3-6126d1a36579\" (UID: \"96f2521c-a4c7-4bec-80e3-6126d1a36579\") " Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.109365 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg" (OuterVolumeSpecName: "kube-api-access-l2kqg") pod "96f2521c-a4c7-4bec-80e3-6126d1a36579" (UID: "96f2521c-a4c7-4bec-80e3-6126d1a36579"). InnerVolumeSpecName "kube-api-access-l2kqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.124873 4769 scope.go:117] "RemoveContainer" containerID="b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14" Oct 06 07:34:56 crc kubenswrapper[4769]: E1006 07:34:56.125622 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14\": container with ID starting with b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14 not found: ID does not exist" containerID="b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.125812 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14"} err="failed to get container status \"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14\": rpc error: code = NotFound desc = could not find container \"b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14\": container with ID starting with b5a971817dc7f3e5cdb372632b950ee4b2c7bd2560c96f27144bd0e082300d14 not found: ID does not exist" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.132802 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96f2521c-a4c7-4bec-80e3-6126d1a36579" (UID: "96f2521c-a4c7-4bec-80e3-6126d1a36579"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.138621 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data" (OuterVolumeSpecName: "config-data") pod "96f2521c-a4c7-4bec-80e3-6126d1a36579" (UID: "96f2521c-a4c7-4bec-80e3-6126d1a36579"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.207390 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.207429 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kqg\" (UniqueName: \"kubernetes.io/projected/96f2521c-a4c7-4bec-80e3-6126d1a36579-kube-api-access-l2kqg\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.207462 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96f2521c-a4c7-4bec-80e3-6126d1a36579-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.406178 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.420275 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.429191 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:56 crc kubenswrapper[4769]: E1006 07:34:56.429702 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f2521c-a4c7-4bec-80e3-6126d1a36579" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 
07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.429718 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f2521c-a4c7-4bec-80e3-6126d1a36579" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.429878 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f2521c-a4c7-4bec-80e3-6126d1a36579" containerName="nova-cell1-novncproxy-novncproxy" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.430410 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.437376 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.437665 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.437826 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.445287 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.512646 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.512722 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-nova-novncproxy-tls-certs\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.512761 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkzn\" (UniqueName: \"kubernetes.io/projected/5554fc29-0173-4e76-aa22-355c4f3725d2-kube-api-access-frkzn\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.512930 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.513000 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.615232 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.615301 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.615495 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.615532 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.615558 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkzn\" (UniqueName: \"kubernetes.io/projected/5554fc29-0173-4e76-aa22-355c4f3725d2-kube-api-access-frkzn\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.619403 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.619757 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " 
pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.619776 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.622115 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5554fc29-0173-4e76-aa22-355c4f3725d2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.630986 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkzn\" (UniqueName: \"kubernetes.io/projected/5554fc29-0173-4e76-aa22-355c4f3725d2-kube-api-access-frkzn\") pod \"nova-cell1-novncproxy-0\" (UID: \"5554fc29-0173-4e76-aa22-355c4f3725d2\") " pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:56 crc kubenswrapper[4769]: I1006 07:34:56.759162 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:34:57 crc kubenswrapper[4769]: I1006 07:34:57.238130 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 06 07:34:57 crc kubenswrapper[4769]: I1006 07:34:57.243558 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 07:34:57 crc kubenswrapper[4769]: I1006 07:34:57.244020 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 07:34:57 crc kubenswrapper[4769]: I1006 07:34:57.249059 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 07:34:57 crc kubenswrapper[4769]: W1006 07:34:57.252208 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5554fc29_0173_4e76_aa22_355c4f3725d2.slice/crio-519a91eace4ee2eeb37b74be83bdb18d171b6e56a7f1ec67e332a54c0a3a1285 WatchSource:0}: Error finding container 519a91eace4ee2eeb37b74be83bdb18d171b6e56a7f1ec67e332a54c0a3a1285: Status 404 returned error can't find the container with id 519a91eace4ee2eeb37b74be83bdb18d171b6e56a7f1ec67e332a54c0a3a1285 Oct 06 07:34:57 crc kubenswrapper[4769]: I1006 07:34:57.257545 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.113158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5554fc29-0173-4e76-aa22-355c4f3725d2","Type":"ContainerStarted","Data":"7f8fcb9ab27bea36a86bcc0926037a41e1fcefedaa12ba759106c7a4fa58e6c7"} Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.113807 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"5554fc29-0173-4e76-aa22-355c4f3725d2","Type":"ContainerStarted","Data":"519a91eace4ee2eeb37b74be83bdb18d171b6e56a7f1ec67e332a54c0a3a1285"} Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.113832 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.124279 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.146933 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.1469119819999998 podStartE2EDuration="2.146911982s" podCreationTimestamp="2025-10-06 07:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:34:58.136484897 +0000 UTC m=+1094.660766084" watchObservedRunningTime="2025-10-06 07:34:58.146911982 +0000 UTC m=+1094.671193129" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.203699 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f2521c-a4c7-4bec-80e3-6126d1a36579" path="/var/lib/kubelet/pods/96f2521c-a4c7-4bec-80e3-6126d1a36579/volumes" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.274128 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.279918 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.322288 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395478 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395571 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395591 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395623 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395704 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-b76h7\" (UniqueName: \"kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.395730 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497301 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b76h7\" (UniqueName: \"kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497354 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497400 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497459 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497476 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.497508 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.498303 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.498359 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.498455 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc\") pod 
\"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.498482 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.498889 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.515088 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b76h7\" (UniqueName: \"kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7\") pod \"dnsmasq-dns-7df55567c-2xgsr\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:58 crc kubenswrapper[4769]: I1006 07:34:58.617185 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:34:59 crc kubenswrapper[4769]: I1006 07:34:59.102127 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:34:59 crc kubenswrapper[4769]: I1006 07:34:59.124977 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" event={"ID":"3546e562-3dc5-4b9f-821c-46c3d530a1c3","Type":"ContainerStarted","Data":"1235760395ea9c09c32c5012675d28ad104942471b64dc7445018df104cfa38c"} Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.090983 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.118283 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134221 4769 generic.go:334] "Generic (PLEG): container finished" podID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerID="cf4dfec66ceae01ca17b72dfac9b58e81c4d2c2eebc1dd95ddfc0a51e621a00b" exitCode=0 Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134299 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" event={"ID":"3546e562-3dc5-4b9f-821c-46c3d530a1c3","Type":"ContainerDied","Data":"cf4dfec66ceae01ca17b72dfac9b58e81c4d2c2eebc1dd95ddfc0a51e621a00b"} Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134609 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="sg-core" containerID="cri-o://6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3" gracePeriod=30 Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134647 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" 
containerName="ceilometer-notification-agent" containerID="cri-o://87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda" gracePeriod=30 Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134610 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="proxy-httpd" containerID="cri-o://0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3" gracePeriod=30 Oct 06 07:35:00 crc kubenswrapper[4769]: I1006 07:35:00.134775 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-central-agent" containerID="cri-o://1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f" gracePeriod=30 Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.039309 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.147493 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" event={"ID":"3546e562-3dc5-4b9f-821c-46c3d530a1c3","Type":"ContainerStarted","Data":"2664ece92b0ae82a388545fd80520480a9905668f2be84a1c8fa4446e7e887e2"} Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.147604 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.152544 4769 generic.go:334] "Generic (PLEG): container finished" podID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerID="0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3" exitCode=0 Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.152584 4769 generic.go:334] "Generic (PLEG): container finished" podID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerID="6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3" exitCode=2 Oct 
06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.152598 4769 generic.go:334] "Generic (PLEG): container finished" podID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerID="1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f" exitCode=0 Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.152791 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-log" containerID="cri-o://df03044b8dd36567c91f27ced0e69e67e6f1dd24e6def12fed3e5525bb8a826b" gracePeriod=30 Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.153076 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerDied","Data":"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3"} Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.153114 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerDied","Data":"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3"} Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.153126 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerDied","Data":"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f"} Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.153186 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-api" containerID="cri-o://413713bf427b64dc1c0bd63dc4d46f1b41e581e35f87299335685c5080a6ef1b" gracePeriod=30 Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.182384 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-7df55567c-2xgsr" podStartSLOduration=3.182366593 podStartE2EDuration="3.182366593s" podCreationTimestamp="2025-10-06 07:34:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:01.180827412 +0000 UTC m=+1097.705108589" watchObservedRunningTime="2025-10-06 07:35:01.182366593 +0000 UTC m=+1097.706647740" Oct 06 07:35:01 crc kubenswrapper[4769]: I1006 07:35:01.759435 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.164245 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerID="413713bf427b64dc1c0bd63dc4d46f1b41e581e35f87299335685c5080a6ef1b" exitCode=0 Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.164555 4769 generic.go:334] "Generic (PLEG): container finished" podID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerID="df03044b8dd36567c91f27ced0e69e67e6f1dd24e6def12fed3e5525bb8a826b" exitCode=143 Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.165338 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerDied","Data":"413713bf427b64dc1c0bd63dc4d46f1b41e581e35f87299335685c5080a6ef1b"} Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.165367 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerDied","Data":"df03044b8dd36567c91f27ced0e69e67e6f1dd24e6def12fed3e5525bb8a826b"} Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.397811 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.473754 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tndm9\" (UniqueName: \"kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9\") pod \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.473902 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs\") pod \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.474006 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data\") pod \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.474106 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle\") pod \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\" (UID: \"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.474842 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs" (OuterVolumeSpecName: "logs") pod "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" (UID: "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.494649 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9" (OuterVolumeSpecName: "kube-api-access-tndm9") pod "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" (UID: "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44"). InnerVolumeSpecName "kube-api-access-tndm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.512588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data" (OuterVolumeSpecName: "config-data") pod "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" (UID: "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.518672 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" (UID: "7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.575672 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tndm9\" (UniqueName: \"kubernetes.io/projected/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-kube-api-access-tndm9\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.575704 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.575714 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.575722 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.768161 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881255 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881294 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881342 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfhs\" (UniqueName: \"kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881508 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881655 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881737 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.881791 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd\") pod \"76a29e45-4076-42dd-801b-2fc41d73ea04\" (UID: \"76a29e45-4076-42dd-801b-2fc41d73ea04\") " Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.883048 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.883498 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.885247 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs" (OuterVolumeSpecName: "kube-api-access-scfhs") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "kube-api-access-scfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.888561 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts" (OuterVolumeSpecName: "scripts") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.911370 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.933532 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.952544 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984229 4769 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984258 4769 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984271 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984280 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfhs\" (UniqueName: \"kubernetes.io/projected/76a29e45-4076-42dd-801b-2fc41d73ea04-kube-api-access-scfhs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984289 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984298 4769 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.984305 4769 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76a29e45-4076-42dd-801b-2fc41d73ea04-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:02 crc kubenswrapper[4769]: I1006 07:35:02.986518 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data" (OuterVolumeSpecName: "config-data") pod "76a29e45-4076-42dd-801b-2fc41d73ea04" (UID: "76a29e45-4076-42dd-801b-2fc41d73ea04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.086365 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a29e45-4076-42dd-801b-2fc41d73ea04-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.177844 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44","Type":"ContainerDied","Data":"c0cbff6a411ec6a56cfce1ac9573e974ccc8f56c8a4e54488ffd23e8e63e07a7"} Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.177923 4769 scope.go:117] "RemoveContainer" containerID="413713bf427b64dc1c0bd63dc4d46f1b41e581e35f87299335685c5080a6ef1b" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.178095 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.183968 4769 generic.go:334] "Generic (PLEG): container finished" podID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerID="87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda" exitCode=0 Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.184035 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerDied","Data":"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda"} Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.184066 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"76a29e45-4076-42dd-801b-2fc41d73ea04","Type":"ContainerDied","Data":"98420e17e8f50b07a0738d26a681284447c8bc1e95eb3b7f7fc11f1d7daaaff9"} Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.184036 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.211968 4769 scope.go:117] "RemoveContainer" containerID="df03044b8dd36567c91f27ced0e69e67e6f1dd24e6def12fed3e5525bb8a826b" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.246481 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.249522 4769 scope.go:117] "RemoveContainer" containerID="0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.266313 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.277887 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289015 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289660 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-api" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289694 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-api" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289756 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="sg-core" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289770 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="sg-core" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289784 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-log" Oct 06 07:35:03 crc 
kubenswrapper[4769]: I1006 07:35:03.289795 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-log" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289817 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-central-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289831 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-central-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289865 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-notification-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289879 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-notification-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.289896 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="proxy-httpd" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.289908 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="proxy-httpd" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290210 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-notification-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290248 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="ceilometer-central-agent" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290267 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" 
containerName="proxy-httpd" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290291 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-api" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290312 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" containerName="nova-api-log" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.290338 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" containerName="sg-core" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.293108 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.301275 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.301381 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.301508 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.311622 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.318464 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.318669 4769 scope.go:117] "RemoveContainer" containerID="6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.324721 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.338185 4769 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.338311 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.340177 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.340920 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.341245 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.385437 4769 scope.go:117] "RemoveContainer" containerID="87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.391993 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392059 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " 
pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392077 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-config-data\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392131 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7ffv\" (UniqueName: \"kubernetes.io/projected/5b3be702-b284-41a7-8e76-a09139eed2b4-kube-api-access-x7ffv\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392153 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392172 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392278 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392329 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392354 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392369 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392385 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.392415 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-scripts\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.402768 4769 scope.go:117] "RemoveContainer" containerID="1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.420857 4769 scope.go:117] "RemoveContainer" containerID="0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.421362 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3\": container with ID starting with 0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3 not found: ID does not exist" containerID="0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.421391 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3"} err="failed to get container status \"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3\": rpc error: code = NotFound desc = could not find container \"0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3\": container with ID starting with 0daa6077506884d646f976c6d34544b94306428c6b4156d1bb7bc6bb055e53f3 not found: ID does not exist" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.421414 4769 scope.go:117] "RemoveContainer" containerID="6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.421683 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3\": container with ID starting with 6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3 not found: ID does not exist" containerID="6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.421735 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3"} err="failed to get container status \"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3\": rpc error: code = NotFound desc = could not find container \"6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3\": container with ID starting with 6578c7e67ab47b85a1b7e18fb31fae4ba1c04649fe5b123e3b3f6466ad2cc4a3 not found: ID does not exist" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.421765 4769 scope.go:117] "RemoveContainer" containerID="87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.422291 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda\": container with ID starting with 87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda not found: ID does not exist" containerID="87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.422325 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda"} err="failed to get container status \"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda\": rpc error: code = NotFound desc = could not find container \"87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda\": container with ID 
starting with 87b8d50103a724bcd498b13d11dae47a0d64c441b48c95bd4def0c16ad135cda not found: ID does not exist" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.422347 4769 scope.go:117] "RemoveContainer" containerID="1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f" Oct 06 07:35:03 crc kubenswrapper[4769]: E1006 07:35:03.422866 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f\": container with ID starting with 1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f not found: ID does not exist" containerID="1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.422889 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f"} err="failed to get container status \"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f\": rpc error: code = NotFound desc = could not find container \"1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f\": container with ID starting with 1d5a7bb6b3c3580d8cbbe4b2bd18ae41ab2d87dedd94f5da58ebdcfddbc58b1f not found: ID does not exist" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494674 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494742 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494767 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494788 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494835 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-scripts\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494919 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494953 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.494977 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.495007 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-config-data\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.495032 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7ffv\" (UniqueName: \"kubernetes.io/projected/5b3be702-b284-41a7-8e76-a09139eed2b4-kube-api-access-x7ffv\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.495054 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.495075 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.495120 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: 
I1006 07:35:03.495145 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.496082 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.496077 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-log-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.496399 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b3be702-b284-41a7-8e76-a09139eed2b4-run-httpd\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.499762 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.500315 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " 
pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.501836 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.503407 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.503501 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.503521 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-scripts\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.504893 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.507904 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.508675 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b3be702-b284-41a7-8e76-a09139eed2b4-config-data\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.511785 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7ffv\" (UniqueName: \"kubernetes.io/projected/5b3be702-b284-41a7-8e76-a09139eed2b4-kube-api-access-x7ffv\") pod \"ceilometer-0\" (UID: \"5b3be702-b284-41a7-8e76-a09139eed2b4\") " pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.523031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42\") pod \"nova-api-0\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " pod="openstack/nova-api-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.684171 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 06 07:35:03 crc kubenswrapper[4769]: I1006 07:35:03.690395 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:04 crc kubenswrapper[4769]: I1006 07:35:04.154395 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:04 crc kubenswrapper[4769]: I1006 07:35:04.208284 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a29e45-4076-42dd-801b-2fc41d73ea04" path="/var/lib/kubelet/pods/76a29e45-4076-42dd-801b-2fc41d73ea04/volumes" Oct 06 07:35:04 crc kubenswrapper[4769]: I1006 07:35:04.213120 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44" path="/var/lib/kubelet/pods/7d3c8d22-6c66-4cc6-b1a9-7528e8fbbe44/volumes" Oct 06 07:35:04 crc kubenswrapper[4769]: I1006 07:35:04.222207 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerStarted","Data":"13678422ad7de5e37cabeceb14e440f0fbb4d93d82d298a5ef6aecff5dc770c8"} Oct 06 07:35:04 crc kubenswrapper[4769]: W1006 07:35:04.222736 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b3be702_b284_41a7_8e76_a09139eed2b4.slice/crio-6d905d9aa80ea3185152aa82b706e2b02edf832f7a6c917719c5543938521e65 WatchSource:0}: Error finding container 6d905d9aa80ea3185152aa82b706e2b02edf832f7a6c917719c5543938521e65: Status 404 returned error can't find the container with id 6d905d9aa80ea3185152aa82b706e2b02edf832f7a6c917719c5543938521e65 Oct 06 07:35:04 crc kubenswrapper[4769]: I1006 07:35:04.234148 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 06 07:35:05 crc kubenswrapper[4769]: I1006 07:35:05.235810 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3be702-b284-41a7-8e76-a09139eed2b4","Type":"ContainerStarted","Data":"f03ebfd93a9383676c77797186b7e1433a3b06105f783aaedfe46e16d6083bae"} Oct 06 07:35:05 
crc kubenswrapper[4769]: I1006 07:35:05.236523 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3be702-b284-41a7-8e76-a09139eed2b4","Type":"ContainerStarted","Data":"2c3bf4aee7b577e267852b64f121949d30fa88959500fc9a1593967121c747fd"} Oct 06 07:35:05 crc kubenswrapper[4769]: I1006 07:35:05.236546 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3be702-b284-41a7-8e76-a09139eed2b4","Type":"ContainerStarted","Data":"6d905d9aa80ea3185152aa82b706e2b02edf832f7a6c917719c5543938521e65"} Oct 06 07:35:05 crc kubenswrapper[4769]: I1006 07:35:05.239335 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerStarted","Data":"c2d5507a994dcc33e04325a40e1557f51f62284afa629f8e0f25c4bea142352a"} Oct 06 07:35:05 crc kubenswrapper[4769]: I1006 07:35:05.239393 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerStarted","Data":"19071b093a1fe318a89b6c76d343124fe042e186c0b223b49896fcd34a6a68b8"} Oct 06 07:35:05 crc kubenswrapper[4769]: I1006 07:35:05.281198 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.281177282 podStartE2EDuration="2.281177282s" podCreationTimestamp="2025-10-06 07:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:05.266531822 +0000 UTC m=+1101.790812989" watchObservedRunningTime="2025-10-06 07:35:05.281177282 +0000 UTC m=+1101.805458439" Oct 06 07:35:06 crc kubenswrapper[4769]: I1006 07:35:06.252193 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5b3be702-b284-41a7-8e76-a09139eed2b4","Type":"ContainerStarted","Data":"eb5e458b8064b6489c622cbf50b357843da98a1826354cbe5695d6dd364724e2"} Oct 06 07:35:06 crc kubenswrapper[4769]: I1006 07:35:06.760324 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:35:06 crc kubenswrapper[4769]: I1006 07:35:06.786979 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.287484 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b3be702-b284-41a7-8e76-a09139eed2b4","Type":"ContainerStarted","Data":"e049f3250c9ffdfb00c2578cd8713db5eb0f4d2ab419e3494b6d324842fa5a12"} Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.288337 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.323440 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.767358227 podStartE2EDuration="4.323401545s" podCreationTimestamp="2025-10-06 07:35:03 +0000 UTC" firstStartedPulling="2025-10-06 07:35:04.224616291 +0000 UTC m=+1100.748897448" lastFinishedPulling="2025-10-06 07:35:06.780659609 +0000 UTC m=+1103.304940766" observedRunningTime="2025-10-06 07:35:07.318000137 +0000 UTC m=+1103.842281324" watchObservedRunningTime="2025-10-06 07:35:07.323401545 +0000 UTC m=+1103.847682692" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.324641 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.471318 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-6d8pt"] Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.472624 4769 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.474985 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.475058 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.494552 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6d8pt"] Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.588247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.588286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.588668 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.588790 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2jq8w\" (UniqueName: \"kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.690787 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jq8w\" (UniqueName: \"kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.690889 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.690912 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.691014 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.707121 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.707460 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.707540 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.710200 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jq8w\" (UniqueName: \"kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w\") pod \"nova-cell1-cell-mapping-6d8pt\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:07 crc kubenswrapper[4769]: I1006 07:35:07.792137 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:08 crc kubenswrapper[4769]: I1006 07:35:08.290487 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-6d8pt"] Oct 06 07:35:08 crc kubenswrapper[4769]: I1006 07:35:08.619654 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:35:08 crc kubenswrapper[4769]: I1006 07:35:08.681099 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:35:08 crc kubenswrapper[4769]: I1006 07:35:08.681386 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5798985649-7mtrk" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="dnsmasq-dns" containerID="cri-o://e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c" gracePeriod=10 Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.207008 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.304795 4769 generic.go:334] "Generic (PLEG): container finished" podID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerID="e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c" exitCode=0 Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.305044 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798985649-7mtrk" event={"ID":"d17e6089-8fb3-4ff8-b603-7a266693936a","Type":"ContainerDied","Data":"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c"} Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.305070 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5798985649-7mtrk" event={"ID":"d17e6089-8fb3-4ff8-b603-7a266693936a","Type":"ContainerDied","Data":"028c6bdd8afb884f835d09000564305b2ea9be32dfbf63c3f1042e28cf07a2cb"} Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.305086 4769 scope.go:117] "RemoveContainer" containerID="e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.305186 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5798985649-7mtrk" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.309263 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6d8pt" event={"ID":"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0","Type":"ContainerStarted","Data":"ee217f44e19c9940ca1324e099b524225b6ba3ec4d9f80ed74cac3d9e25d8c02"} Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.309313 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6d8pt" event={"ID":"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0","Type":"ContainerStarted","Data":"b0a32998651f71f3ddbcf991d579532f4ffb8b70be690edd2a71131d55be78b6"} Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319632 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb\") pod \"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319708 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc\") pod \"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319818 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config\") pod \"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319844 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb\") pod 
\"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319898 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6npb\" (UniqueName: \"kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb\") pod \"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.319925 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0\") pod \"d17e6089-8fb3-4ff8-b603-7a266693936a\" (UID: \"d17e6089-8fb3-4ff8-b603-7a266693936a\") " Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.330542 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb" (OuterVolumeSpecName: "kube-api-access-r6npb") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "kube-api-access-r6npb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.344944 4769 scope.go:117] "RemoveContainer" containerID="a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.365021 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.369802 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.372080 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.395032 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config" (OuterVolumeSpecName: "config") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.406464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d17e6089-8fb3-4ff8-b603-7a266693936a" (UID: "d17e6089-8fb3-4ff8-b603-7a266693936a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422882 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422917 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422928 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6npb\" (UniqueName: \"kubernetes.io/projected/d17e6089-8fb3-4ff8-b603-7a266693936a-kube-api-access-r6npb\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422936 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422944 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.422952 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d17e6089-8fb3-4ff8-b603-7a266693936a-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.449004 4769 scope.go:117] "RemoveContainer" containerID="e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c" Oct 06 07:35:09 crc kubenswrapper[4769]: E1006 07:35:09.449507 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c\": container with ID starting with e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c not found: ID does not exist" containerID="e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.449552 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c"} err="failed to get container status \"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c\": rpc error: code = NotFound desc = could not find container \"e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c\": container with ID starting with e18ec90af0ae51e5bd841023eb6b567fa72544e961472e46ba8a45e1b40e928c not found: ID does not exist" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.449581 4769 scope.go:117] "RemoveContainer" containerID="a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3" Oct 06 07:35:09 crc kubenswrapper[4769]: E1006 07:35:09.449885 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3\": container with ID starting with a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3 not found: ID does not exist" containerID="a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.449920 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3"} err="failed to get container status \"a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3\": rpc error: code = NotFound desc = could not find container 
\"a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3\": container with ID starting with a0a229b246bf27bb22303e771e6e89003f4794450c630171fba97b3324df48f3 not found: ID does not exist" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.636978 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-6d8pt" podStartSLOduration=2.63695931 podStartE2EDuration="2.63695931s" podCreationTimestamp="2025-10-06 07:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:09.325931032 +0000 UTC m=+1105.850212199" watchObservedRunningTime="2025-10-06 07:35:09.63695931 +0000 UTC m=+1106.161240467" Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.641568 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:35:09 crc kubenswrapper[4769]: I1006 07:35:09.651137 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5798985649-7mtrk"] Oct 06 07:35:10 crc kubenswrapper[4769]: I1006 07:35:10.175542 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" path="/var/lib/kubelet/pods/d17e6089-8fb3-4ff8-b603-7a266693936a/volumes" Oct 06 07:35:13 crc kubenswrapper[4769]: I1006 07:35:13.691658 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:35:13 crc kubenswrapper[4769]: I1006 07:35:13.692611 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:35:14 crc kubenswrapper[4769]: I1006 07:35:14.380310 4769 generic.go:334] "Generic (PLEG): container finished" podID="4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" containerID="ee217f44e19c9940ca1324e099b524225b6ba3ec4d9f80ed74cac3d9e25d8c02" exitCode=0 Oct 06 07:35:14 crc kubenswrapper[4769]: I1006 07:35:14.380386 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6d8pt" event={"ID":"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0","Type":"ContainerDied","Data":"ee217f44e19c9940ca1324e099b524225b6ba3ec4d9f80ed74cac3d9e25d8c02"} Oct 06 07:35:14 crc kubenswrapper[4769]: I1006 07:35:14.704197 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:14 crc kubenswrapper[4769]: I1006 07:35:14.704203 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.202:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.789561 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.859825 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts\") pod \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.859931 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jq8w\" (UniqueName: \"kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w\") pod \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.859996 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data\") pod \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.860054 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle\") pod \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\" (UID: \"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0\") " Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.867478 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w" (OuterVolumeSpecName: "kube-api-access-2jq8w") pod "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" (UID: "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0"). InnerVolumeSpecName "kube-api-access-2jq8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.869668 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts" (OuterVolumeSpecName: "scripts") pod "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" (UID: "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.897726 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data" (OuterVolumeSpecName: "config-data") pod "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" (UID: "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.917268 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" (UID: "4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.962458 4769 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-scripts\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.962496 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jq8w\" (UniqueName: \"kubernetes.io/projected/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-kube-api-access-2jq8w\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.962507 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:15 crc kubenswrapper[4769]: I1006 07:35:15.962518 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.401981 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-6d8pt" event={"ID":"4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0","Type":"ContainerDied","Data":"b0a32998651f71f3ddbcf991d579532f4ffb8b70be690edd2a71131d55be78b6"} Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.402373 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0a32998651f71f3ddbcf991d579532f4ffb8b70be690edd2a71131d55be78b6" Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.402030 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-6d8pt" Oct 06 07:35:16 crc kubenswrapper[4769]: E1006 07:35:16.411107 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4de9aadc_6e57_40a8_8ecb_c2256ca9f3e0.slice\": RecentStats: unable to find data in memory cache]" Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.586407 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.586667 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" containerName="nova-scheduler-scheduler" containerID="cri-o://0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" gracePeriod=30 Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.603134 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.603438 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-log" containerID="cri-o://19071b093a1fe318a89b6c76d343124fe042e186c0b223b49896fcd34a6a68b8" gracePeriod=30 Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.603517 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-api" containerID="cri-o://c2d5507a994dcc33e04325a40e1557f51f62284afa629f8e0f25c4bea142352a" gracePeriod=30 Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.622722 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.622998 4769 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-log" containerID="cri-o://7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e" gracePeriod=30 Oct 06 07:35:16 crc kubenswrapper[4769]: I1006 07:35:16.623067 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-metadata" containerID="cri-o://9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273" gracePeriod=30 Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.432930 4769 generic.go:334] "Generic (PLEG): container finished" podID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerID="19071b093a1fe318a89b6c76d343124fe042e186c0b223b49896fcd34a6a68b8" exitCode=143 Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.433011 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerDied","Data":"19071b093a1fe318a89b6c76d343124fe042e186c0b223b49896fcd34a6a68b8"} Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.452820 4769 generic.go:334] "Generic (PLEG): container finished" podID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerID="7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e" exitCode=143 Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.452863 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerDied","Data":"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e"} Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.870919 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.903076 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbgcl\" (UniqueName: \"kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl\") pod \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.903230 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle\") pod \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.903282 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs\") pod \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.903463 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs\") pod \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.903578 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data\") pod \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\" (UID: \"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e\") " Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.908628 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs" (OuterVolumeSpecName: "logs") pod "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" (UID: "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.913384 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl" (OuterVolumeSpecName: "kube-api-access-rbgcl") pod "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" (UID: "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e"). InnerVolumeSpecName "kube-api-access-rbgcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.948180 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data" (OuterVolumeSpecName: "config-data") pod "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" (UID: "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.955707 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" (UID: "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:17 crc kubenswrapper[4769]: I1006 07:35:17.995186 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" (UID: "5f924f6f-a352-49ec-bdd5-39bf0a65ed3e"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.005710 4769 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.005862 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.005962 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbgcl\" (UniqueName: \"kubernetes.io/projected/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-kube-api-access-rbgcl\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.006058 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.006128 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.256480 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.258384 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.260054 4769 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.260099 4769 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" containerName="nova-scheduler-scheduler" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.464452 4769 generic.go:334] "Generic (PLEG): container finished" podID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerID="9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273" exitCode=0 Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.464494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerDied","Data":"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273"} Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.464843 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5f924f6f-a352-49ec-bdd5-39bf0a65ed3e","Type":"ContainerDied","Data":"76c15f0f7ae0e3d5ea3518ed6585ad41357c3d8307ab26d92dd32a49d756abdb"} Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.464524 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.464869 4769 scope.go:117] "RemoveContainer" containerID="9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.470192 4769 generic.go:334] "Generic (PLEG): container finished" podID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerID="c2d5507a994dcc33e04325a40e1557f51f62284afa629f8e0f25c4bea142352a" exitCode=0 Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.470231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerDied","Data":"c2d5507a994dcc33e04325a40e1557f51f62284afa629f8e0f25c4bea142352a"} Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.471131 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.487486 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.502548 4769 scope.go:117] "RemoveContainer" containerID="7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.511615 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515311 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515481 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515524 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515565 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515590 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515767 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs\") pod \"0b040638-542d-4dcb-bf8a-77993cb3a76f\" (UID: \"0b040638-542d-4dcb-bf8a-77993cb3a76f\") " Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.515864 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs" (OuterVolumeSpecName: "logs") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.516176 4769 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b040638-542d-4dcb-bf8a-77993cb3a76f-logs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.520707 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42" (OuterVolumeSpecName: "kube-api-access-vsc42") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "kube-api-access-vsc42". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525007 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525496 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-api" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525520 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-api" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525546 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="init" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525557 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="init" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525573 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-metadata" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525581 4769 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-metadata" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525601 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-log" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525608 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-log" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525620 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-log" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525627 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-log" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525641 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="dnsmasq-dns" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525649 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="dnsmasq-dns" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.525681 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" containerName="nova-manage" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525689 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" containerName="nova-manage" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525895 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-api" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525913 4769 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" containerName="nova-manage" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525922 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-log" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525940 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" containerName="nova-metadata-metadata" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525955 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d17e6089-8fb3-4ff8-b603-7a266693936a" containerName="dnsmasq-dns" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.525963 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" containerName="nova-api-log" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.528400 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.530250 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.531842 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.534820 4769 scope.go:117] "RemoveContainer" containerID="9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.535128 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273\": container with ID starting with 9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273 not found: ID does not exist" containerID="9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.535158 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273"} err="failed to get container status \"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273\": rpc error: code = NotFound desc = could not find container \"9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273\": container with ID starting with 9819eeccc25a6f6ed5b38d67f285911d2315991480a72592463a6821820f4273 not found: ID does not exist" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.535179 4769 scope.go:117] "RemoveContainer" containerID="7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e" Oct 06 07:35:18 crc kubenswrapper[4769]: E1006 07:35:18.535708 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e\": container with ID starting with 7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e not found: ID does not exist" containerID="7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.535728 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e"} err="failed to get container status \"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e\": rpc error: code = NotFound desc = could not find container \"7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e\": container with ID starting with 7209f2fc18629ca725d92e4642a5e61a14b2d942984671a8dd4db94e89a7720e not found: ID does not exist" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.541634 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data" (OuterVolumeSpecName: "config-data") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.556349 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.564156 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.569596 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.576685 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0b040638-542d-4dcb-bf8a-77993cb3a76f" (UID: "0b040638-542d-4dcb-bf8a-77993cb3a76f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618062 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618267 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f82871-7da7-45c1-abda-273fa82504af-logs\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcldg\" (UniqueName: \"kubernetes.io/projected/b0f82871-7da7-45c1-abda-273fa82504af-kube-api-access-kcldg\") pod 
\"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618500 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-config-data\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618629 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618746 4769 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618803 4769 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618863 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.618936 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b040638-542d-4dcb-bf8a-77993cb3a76f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 
07:35:18.618986 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsc42\" (UniqueName: \"kubernetes.io/projected/0b040638-542d-4dcb-bf8a-77993cb3a76f-kube-api-access-vsc42\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.720705 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcldg\" (UniqueName: \"kubernetes.io/projected/b0f82871-7da7-45c1-abda-273fa82504af-kube-api-access-kcldg\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.720756 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-config-data\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.720832 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.720878 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.720909 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f82871-7da7-45c1-abda-273fa82504af-logs\") pod \"nova-metadata-0\" (UID: 
\"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.721265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0f82871-7da7-45c1-abda-273fa82504af-logs\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.724974 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.725247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-config-data\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.725528 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0f82871-7da7-45c1-abda-273fa82504af-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.735968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcldg\" (UniqueName: \"kubernetes.io/projected/b0f82871-7da7-45c1-abda-273fa82504af-kube-api-access-kcldg\") pod \"nova-metadata-0\" (UID: \"b0f82871-7da7-45c1-abda-273fa82504af\") " pod="openstack/nova-metadata-0" Oct 06 07:35:18 crc kubenswrapper[4769]: I1006 07:35:18.853265 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.137314 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 06 07:35:19 crc kubenswrapper[4769]: W1006 07:35:19.139188 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0f82871_7da7_45c1_abda_273fa82504af.slice/crio-44cc9650a080ef5bb904cc3b8c1e88a21bd8ad0efcc284074b856ce9f0be4890 WatchSource:0}: Error finding container 44cc9650a080ef5bb904cc3b8c1e88a21bd8ad0efcc284074b856ce9f0be4890: Status 404 returned error can't find the container with id 44cc9650a080ef5bb904cc3b8c1e88a21bd8ad0efcc284074b856ce9f0be4890 Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.481515 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b040638-542d-4dcb-bf8a-77993cb3a76f","Type":"ContainerDied","Data":"13678422ad7de5e37cabeceb14e440f0fbb4d93d82d298a5ef6aecff5dc770c8"} Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.482835 4769 scope.go:117] "RemoveContainer" containerID="c2d5507a994dcc33e04325a40e1557f51f62284afa629f8e0f25c4bea142352a" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.481559 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.484285 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f82871-7da7-45c1-abda-273fa82504af","Type":"ContainerStarted","Data":"db8614ce305921b1477195ee91622e0afbb340b779095e5a3351655bd0fa31ef"} Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.484348 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f82871-7da7-45c1-abda-273fa82504af","Type":"ContainerStarted","Data":"44cc9650a080ef5bb904cc3b8c1e88a21bd8ad0efcc284074b856ce9f0be4890"} Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.504057 4769 scope.go:117] "RemoveContainer" containerID="19071b093a1fe318a89b6c76d343124fe042e186c0b223b49896fcd34a6a68b8" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.519339 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.530345 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.539826 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.543871 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.548352 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.548749 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.548826 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.568781 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638510 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638546 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3f83fc-48c6-4323-83be-b39bc9529799-logs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638602 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-config-data\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638642 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638679 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgjxl\" (UniqueName: \"kubernetes.io/projected/7a3f83fc-48c6-4323-83be-b39bc9529799-kube-api-access-dgjxl\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.638703 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740493 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-config-data\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740548 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740589 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjxl\" (UniqueName: \"kubernetes.io/projected/7a3f83fc-48c6-4323-83be-b39bc9529799-kube-api-access-dgjxl\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " 
pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740612 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740688 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.740706 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3f83fc-48c6-4323-83be-b39bc9529799-logs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.741149 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a3f83fc-48c6-4323-83be-b39bc9529799-logs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.745756 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-public-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.745909 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-config-data\") pod 
\"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.746568 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.747922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a3f83fc-48c6-4323-83be-b39bc9529799-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.761544 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjxl\" (UniqueName: \"kubernetes.io/projected/7a3f83fc-48c6-4323-83be-b39bc9529799-kube-api-access-dgjxl\") pod \"nova-api-0\" (UID: \"7a3f83fc-48c6-4323-83be-b39bc9529799\") " pod="openstack/nova-api-0" Oct 06 07:35:19 crc kubenswrapper[4769]: I1006 07:35:19.882999 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.178012 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b040638-542d-4dcb-bf8a-77993cb3a76f" path="/var/lib/kubelet/pods/0b040638-542d-4dcb-bf8a-77993cb3a76f/volumes" Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.179313 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f924f6f-a352-49ec-bdd5-39bf0a65ed3e" path="/var/lib/kubelet/pods/5f924f6f-a352-49ec-bdd5-39bf0a65ed3e/volumes" Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.370481 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.510340 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0f82871-7da7-45c1-abda-273fa82504af","Type":"ContainerStarted","Data":"c47c3182b88fd30921616ed4e32623f06538e32dae7616f85e99d535f7b6f0eb"} Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.515204 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a3f83fc-48c6-4323-83be-b39bc9529799","Type":"ContainerStarted","Data":"f107853d213d5b245ef1481058a87c09365a88108647e27bc529bd822331d175"} Oct 06 07:35:20 crc kubenswrapper[4769]: I1006 07:35:20.538091 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.538070049 podStartE2EDuration="2.538070049s" podCreationTimestamp="2025-10-06 07:35:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:20.527632033 +0000 UTC m=+1117.051913190" watchObservedRunningTime="2025-10-06 07:35:20.538070049 +0000 UTC m=+1117.062351196" Oct 06 07:35:21 crc kubenswrapper[4769]: I1006 07:35:21.529450 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"7a3f83fc-48c6-4323-83be-b39bc9529799","Type":"ContainerStarted","Data":"74094909ae213ec181aaa9042cdc5d08b9d8705c175ed67b103463b778311ce6"} Oct 06 07:35:21 crc kubenswrapper[4769]: I1006 07:35:21.529507 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7a3f83fc-48c6-4323-83be-b39bc9529799","Type":"ContainerStarted","Data":"e1e9e3a97ade5685616fa36aed3f2dfb2e1690c310077482bdac0d5a89f04b57"} Oct 06 07:35:21 crc kubenswrapper[4769]: I1006 07:35:21.553853 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5538246129999997 podStartE2EDuration="2.553824613s" podCreationTimestamp="2025-10-06 07:35:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:21.544495468 +0000 UTC m=+1118.068776655" watchObservedRunningTime="2025-10-06 07:35:21.553824613 +0000 UTC m=+1118.078105800" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.245273 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.245638 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.540562 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d4d351d-b1dc-4010-988d-08528161f83e" containerID="0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" 
exitCode=0 Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.540683 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d4d351d-b1dc-4010-988d-08528161f83e","Type":"ContainerDied","Data":"0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b"} Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.540741 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8d4d351d-b1dc-4010-988d-08528161f83e","Type":"ContainerDied","Data":"58ba34bb01e162de55ecd02c9d264726d4b9b77916c79e02dec7a0b9b0fd18e1"} Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.540762 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ba34bb01e162de55ecd02c9d264726d4b9b77916c79e02dec7a0b9b0fd18e1" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.545825 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.617762 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tjsd\" (UniqueName: \"kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd\") pod \"8d4d351d-b1dc-4010-988d-08528161f83e\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.617874 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data\") pod \"8d4d351d-b1dc-4010-988d-08528161f83e\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.618125 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle\") pod 
\"8d4d351d-b1dc-4010-988d-08528161f83e\" (UID: \"8d4d351d-b1dc-4010-988d-08528161f83e\") " Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.625122 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd" (OuterVolumeSpecName: "kube-api-access-7tjsd") pod "8d4d351d-b1dc-4010-988d-08528161f83e" (UID: "8d4d351d-b1dc-4010-988d-08528161f83e"). InnerVolumeSpecName "kube-api-access-7tjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.647052 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d4d351d-b1dc-4010-988d-08528161f83e" (UID: "8d4d351d-b1dc-4010-988d-08528161f83e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.651961 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data" (OuterVolumeSpecName: "config-data") pod "8d4d351d-b1dc-4010-988d-08528161f83e" (UID: "8d4d351d-b1dc-4010-988d-08528161f83e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.721543 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.721590 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tjsd\" (UniqueName: \"kubernetes.io/projected/8d4d351d-b1dc-4010-988d-08528161f83e-kube-api-access-7tjsd\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:22 crc kubenswrapper[4769]: I1006 07:35:22.721605 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d4d351d-b1dc-4010-988d-08528161f83e-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.552293 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.596894 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.611652 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.625406 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:23 crc kubenswrapper[4769]: E1006 07:35:23.626142 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" containerName="nova-scheduler-scheduler" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.626166 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" containerName="nova-scheduler-scheduler" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.626547 4769 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" containerName="nova-scheduler-scheduler" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.627767 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.632883 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.659338 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.742615 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.742727 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqhf\" (UniqueName: \"kubernetes.io/projected/79aebe36-4b56-4b00-a3ff-0dd2965702c8-kube-api-access-wvqhf\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.742794 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-config-data\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.844505 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvqhf\" (UniqueName: 
\"kubernetes.io/projected/79aebe36-4b56-4b00-a3ff-0dd2965702c8-kube-api-access-wvqhf\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.844598 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-config-data\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.844706 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.849985 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-config-data\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.851610 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79aebe36-4b56-4b00-a3ff-0dd2965702c8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.854066 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.854148 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 06 07:35:23 crc 
kubenswrapper[4769]: I1006 07:35:23.862633 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvqhf\" (UniqueName: \"kubernetes.io/projected/79aebe36-4b56-4b00-a3ff-0dd2965702c8-kube-api-access-wvqhf\") pod \"nova-scheduler-0\" (UID: \"79aebe36-4b56-4b00-a3ff-0dd2965702c8\") " pod="openstack/nova-scheduler-0" Oct 06 07:35:23 crc kubenswrapper[4769]: I1006 07:35:23.963416 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 06 07:35:24 crc kubenswrapper[4769]: I1006 07:35:24.176626 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d4d351d-b1dc-4010-988d-08528161f83e" path="/var/lib/kubelet/pods/8d4d351d-b1dc-4010-988d-08528161f83e/volumes" Oct 06 07:35:24 crc kubenswrapper[4769]: I1006 07:35:24.394920 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 06 07:35:24 crc kubenswrapper[4769]: W1006 07:35:24.394955 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79aebe36_4b56_4b00_a3ff_0dd2965702c8.slice/crio-fe6e4551208e2357d35f9a638eb9e4d1a016689ce92e5f30d8caf32aa4163a93 WatchSource:0}: Error finding container fe6e4551208e2357d35f9a638eb9e4d1a016689ce92e5f30d8caf32aa4163a93: Status 404 returned error can't find the container with id fe6e4551208e2357d35f9a638eb9e4d1a016689ce92e5f30d8caf32aa4163a93 Oct 06 07:35:24 crc kubenswrapper[4769]: I1006 07:35:24.562031 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"79aebe36-4b56-4b00-a3ff-0dd2965702c8","Type":"ContainerStarted","Data":"fe6e4551208e2357d35f9a638eb9e4d1a016689ce92e5f30d8caf32aa4163a93"} Oct 06 07:35:25 crc kubenswrapper[4769]: I1006 07:35:25.583087 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"79aebe36-4b56-4b00-a3ff-0dd2965702c8","Type":"ContainerStarted","Data":"358d6edfc630f5f1d0956dfd65ab16630179f867cb0d4dd53380f81f968157e5"} Oct 06 07:35:25 crc kubenswrapper[4769]: I1006 07:35:25.604636 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.604618588 podStartE2EDuration="2.604618588s" podCreationTimestamp="2025-10-06 07:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:35:25.599294202 +0000 UTC m=+1122.123575379" watchObservedRunningTime="2025-10-06 07:35:25.604618588 +0000 UTC m=+1122.128899735" Oct 06 07:35:28 crc kubenswrapper[4769]: I1006 07:35:28.853448 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 07:35:28 crc kubenswrapper[4769]: I1006 07:35:28.853844 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 06 07:35:28 crc kubenswrapper[4769]: I1006 07:35:28.964180 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 06 07:35:29 crc kubenswrapper[4769]: I1006 07:35:29.871787 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0f82871-7da7-45c1-abda-273fa82504af" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:29 crc kubenswrapper[4769]: I1006 07:35:29.871810 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0f82871-7da7-45c1-abda-273fa82504af" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:29 crc 
kubenswrapper[4769]: I1006 07:35:29.884066 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:35:29 crc kubenswrapper[4769]: I1006 07:35:29.884137 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 06 07:35:30 crc kubenswrapper[4769]: I1006 07:35:30.899576 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7a3f83fc-48c6-4323-83be-b39bc9529799" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:30 crc kubenswrapper[4769]: I1006 07:35:30.899953 4769 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7a3f83fc-48c6-4323-83be-b39bc9529799" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 06 07:35:33 crc kubenswrapper[4769]: I1006 07:35:33.695495 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 06 07:35:33 crc kubenswrapper[4769]: I1006 07:35:33.964126 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 06 07:35:33 crc kubenswrapper[4769]: I1006 07:35:33.992613 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 06 07:35:34 crc kubenswrapper[4769]: I1006 07:35:34.693054 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 06 07:35:38 crc kubenswrapper[4769]: I1006 07:35:38.861338 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 07:35:38 crc kubenswrapper[4769]: I1006 07:35:38.867454 4769 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 07:35:38 crc kubenswrapper[4769]: I1006 07:35:38.872567 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 06 07:35:39 crc kubenswrapper[4769]: I1006 07:35:39.715135 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 06 07:35:39 crc kubenswrapper[4769]: I1006 07:35:39.892812 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 07:35:39 crc kubenswrapper[4769]: I1006 07:35:39.893406 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 07:35:39 crc kubenswrapper[4769]: I1006 07:35:39.902576 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 07:35:39 crc kubenswrapper[4769]: I1006 07:35:39.902714 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 06 07:35:40 crc kubenswrapper[4769]: I1006 07:35:40.713641 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 06 07:35:40 crc kubenswrapper[4769]: I1006 07:35:40.728402 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 06 07:35:50 crc kubenswrapper[4769]: I1006 07:35:50.545238 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:51 crc kubenswrapper[4769]: I1006 07:35:51.303546 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.246231 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.246313 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.246367 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.247247 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.247321 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3" gracePeriod=600 Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.848153 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3" exitCode=0 Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.848245 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3"} Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.848478 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59"} Oct 06 07:35:52 crc kubenswrapper[4769]: I1006 07:35:52.848506 4769 scope.go:117] "RemoveContainer" containerID="e27a02d72597d106a59d046fdaac952f8bf136a5f46d0da4b4605986d1c55ede" Oct 06 07:35:53 crc kubenswrapper[4769]: I1006 07:35:53.852526 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="rabbitmq" containerID="cri-o://bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299" gracePeriod=604797 Oct 06 07:35:53 crc kubenswrapper[4769]: I1006 07:35:53.871599 4769 scope.go:117] "RemoveContainer" containerID="448e1f67f3dc96e9600e380e09c6aa33fcdcc9654dbdcfbb624441257a49fc21" Oct 06 07:35:53 crc kubenswrapper[4769]: I1006 07:35:53.897345 4769 scope.go:117] "RemoveContainer" containerID="a4c54210ee1e155f789433cf7f2e69ee1b7ffdb5f7a160b6b8ca8ae68ef30e84" Oct 06 07:35:53 crc kubenswrapper[4769]: I1006 07:35:53.933192 4769 scope.go:117] "RemoveContainer" containerID="fad6abc420bcbf81eeb4d3e342f63277564df3052411b78bde7bcd84c7de962d" Oct 06 07:35:54 crc kubenswrapper[4769]: I1006 07:35:54.488705 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="rabbitmq" containerID="cri-o://cf574ca636813e8a4cbc28e21ec7c71b1f037690fa33d684e5abc31b57e31f77" gracePeriod=604797 Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.485573 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557362 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557403 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfdqt\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557621 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557706 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557741 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557756 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557784 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557825 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557864 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.557901 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf\") pod \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\" (UID: \"c556df6a-9389-4852-b1d8-ba7bbf8bc614\") " Oct 06 07:35:55 crc 
kubenswrapper[4769]: I1006 07:35:55.559145 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.560314 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.560781 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.566545 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt" (OuterVolumeSpecName: "kube-api-access-rfdqt") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "kube-api-access-rfdqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.567363 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.570606 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info" (OuterVolumeSpecName: "pod-info") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.570686 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.572816 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.622569 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data" (OuterVolumeSpecName: "config-data") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.643457 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf" (OuterVolumeSpecName: "server-conf") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.659868 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c556df6a-9389-4852-b1d8-ba7bbf8bc614-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.660109 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.664435 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.664594 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: 
I1006 07:35:55.664682 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.664774 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.664865 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.664950 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c556df6a-9389-4852-b1d8-ba7bbf8bc614-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.665038 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c556df6a-9389-4852-b1d8-ba7bbf8bc614-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.665118 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfdqt\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-kube-api-access-rfdqt\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.683922 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.711641 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c556df6a-9389-4852-b1d8-ba7bbf8bc614" (UID: "c556df6a-9389-4852-b1d8-ba7bbf8bc614"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.767234 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c556df6a-9389-4852-b1d8-ba7bbf8bc614-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.767473 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.893804 4769 generic.go:334] "Generic (PLEG): container finished" podID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerID="bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299" exitCode=0 Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.893904 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.893921 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerDied","Data":"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299"} Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.899049 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c556df6a-9389-4852-b1d8-ba7bbf8bc614","Type":"ContainerDied","Data":"4a5f3aa4d95353bf100a70182fbbb491e660051421b82fa44be4615908fa66f8"} Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.899078 4769 scope.go:117] "RemoveContainer" containerID="bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.904690 4769 generic.go:334] "Generic (PLEG): container finished" podID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerID="cf574ca636813e8a4cbc28e21ec7c71b1f037690fa33d684e5abc31b57e31f77" exitCode=0 Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.904726 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerDied","Data":"cf574ca636813e8a4cbc28e21ec7c71b1f037690fa33d684e5abc31b57e31f77"} Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.943002 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.952021 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.954148 4769 scope.go:117] "RemoveContainer" containerID="1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.976304 4769 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:55 crc kubenswrapper[4769]: E1006 07:35:55.976728 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="setup-container" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.976739 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="setup-container" Oct 06 07:35:55 crc kubenswrapper[4769]: E1006 07:35:55.976778 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="rabbitmq" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.976784 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="rabbitmq" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.979836 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" containerName="rabbitmq" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.981105 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.983607 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.983812 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.984174 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-h5gd7" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.984252 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.984325 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.984347 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.985555 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.990076 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.995064 4769 scope.go:117] "RemoveContainer" containerID="bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299" Oct 06 07:35:55 crc kubenswrapper[4769]: E1006 07:35:55.995502 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299\": container with ID starting with bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299 not found: ID does not exist" 
containerID="bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.995563 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299"} err="failed to get container status \"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299\": rpc error: code = NotFound desc = could not find container \"bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299\": container with ID starting with bd5ee281cefdb11d5c899f133129637d92edb17fc776327e6475baf712525299 not found: ID does not exist" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.995591 4769 scope.go:117] "RemoveContainer" containerID="1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7" Oct 06 07:35:55 crc kubenswrapper[4769]: E1006 07:35:55.995805 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7\": container with ID starting with 1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7 not found: ID does not exist" containerID="1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7" Oct 06 07:35:55 crc kubenswrapper[4769]: I1006 07:35:55.995822 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7"} err="failed to get container status \"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7\": rpc error: code = NotFound desc = could not find container \"1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7\": container with ID starting with 1c5698815f65af07134574db2f002b37087188d4d11f54541949c9762353cdd7 not found: ID does not exist" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.073883 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074216 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074292 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074315 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074348 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-config-data\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074371 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqbf\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-kube-api-access-njqbf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074396 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074418 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db2de8-5580-43f3-aa10-3a1cc7806fba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074463 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074517 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db2de8-5580-43f3-aa10-3a1cc7806fba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.074570 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.179460 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c556df6a-9389-4852-b1d8-ba7bbf8bc614" path="/var/lib/kubelet/pods/c556df6a-9389-4852-b1d8-ba7bbf8bc614/volumes" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.181137 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db2de8-5580-43f3-aa10-3a1cc7806fba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182244 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182317 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db2de8-5580-43f3-aa10-3a1cc7806fba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182375 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182453 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182486 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182561 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182589 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182626 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-config-data\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqbf\" (UniqueName: 
\"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-kube-api-access-njqbf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182685 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.182913 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.183898 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.185322 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.185596 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " 
pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.186385 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e5db2de8-5580-43f3-aa10-3a1cc7806fba-config-data\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.186965 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.187274 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.187923 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e5db2de8-5580-43f3-aa10-3a1cc7806fba-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.198837 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e5db2de8-5580-43f3-aa10-3a1cc7806fba-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.200136 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.204732 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqbf\" (UniqueName: \"kubernetes.io/projected/e5db2de8-5580-43f3-aa10-3a1cc7806fba-kube-api-access-njqbf\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.239283 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"e5db2de8-5580-43f3-aa10-3a1cc7806fba\") " pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.276350 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.305531 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385647 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385704 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385731 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385770 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385819 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385853 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385895 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385956 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.385983 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.386016 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.386050 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj\") pod \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\" (UID: \"654c5c70-fc54-4a56-9fb8-c1ffe32089ca\") " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 
07:35:56.387046 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.387462 4769 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.388211 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.388973 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.392497 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info" (OuterVolumeSpecName: "pod-info") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.393715 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.394293 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.398581 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj" (OuterVolumeSpecName: "kube-api-access-2g4lj") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "kube-api-access-2g4lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.400539 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.444854 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data" (OuterVolumeSpecName: "config-data") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.477011 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf" (OuterVolumeSpecName: "server-conf") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489495 4769 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-server-conf\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489523 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489552 4769 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489563 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc 
kubenswrapper[4769]: I1006 07:35:56.489578 4769 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489590 4769 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-pod-info\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489601 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g4lj\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-kube-api-access-2g4lj\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489614 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.489626 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.517636 4769 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.570592 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "654c5c70-fc54-4a56-9fb8-c1ffe32089ca" (UID: "654c5c70-fc54-4a56-9fb8-c1ffe32089ca"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.591265 4769 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.591301 4769 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/654c5c70-fc54-4a56-9fb8-c1ffe32089ca-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.817887 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 06 07:35:56 crc kubenswrapper[4769]: W1006 07:35:56.824094 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5db2de8_5580_43f3_aa10_3a1cc7806fba.slice/crio-c5e2bc7eb6d4ee2b4335495b72cfbfdce1bc471d43083884eb0687cff8f174ce WatchSource:0}: Error finding container c5e2bc7eb6d4ee2b4335495b72cfbfdce1bc471d43083884eb0687cff8f174ce: Status 404 returned error can't find the container with id c5e2bc7eb6d4ee2b4335495b72cfbfdce1bc471d43083884eb0687cff8f174ce Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.925845 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"654c5c70-fc54-4a56-9fb8-c1ffe32089ca","Type":"ContainerDied","Data":"84e3fa8682185fdf214584b4dacb178239fcf9b262c7999309201d3eb48121b5"} Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.925899 4769 scope.go:117] "RemoveContainer" containerID="cf574ca636813e8a4cbc28e21ec7c71b1f037690fa33d684e5abc31b57e31f77" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.925918 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.930594 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e5db2de8-5580-43f3-aa10-3a1cc7806fba","Type":"ContainerStarted","Data":"c5e2bc7eb6d4ee2b4335495b72cfbfdce1bc471d43083884eb0687cff8f174ce"} Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.973323 4769 scope.go:117] "RemoveContainer" containerID="3357c991a8b7085ad0995bf2bb18c4579abdee907c0b8d5108a27ad92d925d5a" Oct 06 07:35:56 crc kubenswrapper[4769]: I1006 07:35:56.987268 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.002448 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.011523 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:57 crc kubenswrapper[4769]: E1006 07:35:57.011890 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="rabbitmq" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.011905 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="rabbitmq" Oct 06 07:35:57 crc kubenswrapper[4769]: E1006 07:35:57.011945 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="setup-container" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.011953 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="setup-container" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.012138 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" containerName="rabbitmq" 
Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.013099 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.015992 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.016164 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.016258 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.016354 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.016524 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.016690 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.017048 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9p6xc" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.041410 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.098416 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 
07:35:57.098677 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.098775 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.098889 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.098991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099114 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099236 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099336 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099456 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.099659 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4rr\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-kube-api-access-fm4rr\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.201438 
4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202100 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202168 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202238 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202302 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202456 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202559 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202633 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202701 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202773 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.202846 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm4rr\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-kube-api-access-fm4rr\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.204017 4769 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.204984 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.205228 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.204745 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.206539 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.207155 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.208073 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.208634 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.208643 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.218194 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.220469 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fm4rr\" (UniqueName: \"kubernetes.io/projected/8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6-kube-api-access-fm4rr\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.245706 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6\") " pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.339998 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.858388 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 06 07:35:57 crc kubenswrapper[4769]: W1006 07:35:57.858802 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b20fe41_af9c_40f2_aeae_ee1cd6c56bf6.slice/crio-7972bc27b85a2bca53d5fec5b7d26f60cfc9e66d7282595126ab20551d4832f5 WatchSource:0}: Error finding container 7972bc27b85a2bca53d5fec5b7d26f60cfc9e66d7282595126ab20551d4832f5: Status 404 returned error can't find the container with id 7972bc27b85a2bca53d5fec5b7d26f60cfc9e66d7282595126ab20551d4832f5 Oct 06 07:35:57 crc kubenswrapper[4769]: I1006 07:35:57.942626 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6","Type":"ContainerStarted","Data":"7972bc27b85a2bca53d5fec5b7d26f60cfc9e66d7282595126ab20551d4832f5"} Oct 06 07:35:58 crc kubenswrapper[4769]: I1006 07:35:58.175670 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="654c5c70-fc54-4a56-9fb8-c1ffe32089ca" 
path="/var/lib/kubelet/pods/654c5c70-fc54-4a56-9fb8-c1ffe32089ca/volumes" Oct 06 07:35:58 crc kubenswrapper[4769]: I1006 07:35:58.953498 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e5db2de8-5580-43f3-aa10-3a1cc7806fba","Type":"ContainerStarted","Data":"15f06c082fe0916094b786cc3a5f683aa632d02b8fa055e616ece7a980b53bb0"} Oct 06 07:35:59 crc kubenswrapper[4769]: I1006 07:35:59.962413 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6","Type":"ContainerStarted","Data":"4ef9a5ded33da533c86ffdf0fca999dc39c46c75ac441dfef9e39f5f205ef0ca"} Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.192833 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.194983 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.198879 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.215299 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242554 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242600 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242621 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242685 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242738 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zsh6\" (UniqueName: \"kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242770 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.242787 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344606 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344622 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344663 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344698 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zsh6\" (UniqueName: 
\"kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344721 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.344737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.345617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.346191 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.346435 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.346464 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.346960 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.347286 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.362474 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zsh6\" (UniqueName: \"kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6\") pod \"dnsmasq-dns-854c7674ff-m6hwp\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.522796 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:04 crc kubenswrapper[4769]: I1006 07:36:04.991027 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:05 crc kubenswrapper[4769]: I1006 07:36:05.011599 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" event={"ID":"b344930a-ea61-4f4a-8e1a-baf9abdacb58","Type":"ContainerStarted","Data":"0b11aa85ea616ab33a4e30b97e18aaefbaa6c86d0d54dd97f52ba92fa7b3e910"} Oct 06 07:36:06 crc kubenswrapper[4769]: I1006 07:36:06.023511 4769 generic.go:334] "Generic (PLEG): container finished" podID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerID="512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3" exitCode=0 Oct 06 07:36:06 crc kubenswrapper[4769]: I1006 07:36:06.023672 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" event={"ID":"b344930a-ea61-4f4a-8e1a-baf9abdacb58","Type":"ContainerDied","Data":"512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3"} Oct 06 07:36:07 crc kubenswrapper[4769]: I1006 07:36:07.041851 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" event={"ID":"b344930a-ea61-4f4a-8e1a-baf9abdacb58","Type":"ContainerStarted","Data":"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588"} Oct 06 07:36:07 crc kubenswrapper[4769]: I1006 07:36:07.042273 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:07 crc kubenswrapper[4769]: I1006 07:36:07.078210 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" podStartSLOduration=3.078193033 podStartE2EDuration="3.078193033s" podCreationTimestamp="2025-10-06 07:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:36:07.073816743 +0000 UTC m=+1163.598097970" watchObservedRunningTime="2025-10-06 07:36:07.078193033 +0000 UTC m=+1163.602474190" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.524233 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.626334 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.626673 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="dnsmasq-dns" containerID="cri-o://2664ece92b0ae82a388545fd80520480a9905668f2be84a1c8fa4446e7e887e2" gracePeriod=10 Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.849534 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698c57b6fc-kpzq9"] Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.853124 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.881067 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698c57b6fc-kpzq9"] Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.981801 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-svc\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982047 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66td6\" (UniqueName: \"kubernetes.io/projected/17a31b12-440b-4bad-87f5-176edddf3ba4-kube-api-access-66td6\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982112 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-swift-storage-0\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982247 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-config\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982283 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-openstack-edpm-ipam\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982363 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-nb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:14 crc kubenswrapper[4769]: I1006 07:36:14.982528 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-sb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.083924 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-config\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.084132 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-openstack-edpm-ipam\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.084298 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-nb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.084520 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-sb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.084656 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-svc\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.084930 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66td6\" (UniqueName: \"kubernetes.io/projected/17a31b12-440b-4bad-87f5-176edddf3ba4-kube-api-access-66td6\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.085089 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-swift-storage-0\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.086634 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-swift-storage-0\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.087208 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-openstack-edpm-ipam\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.087265 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-nb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.087515 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-dns-svc\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.087618 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-ovsdbserver-sb\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.087733 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/17a31b12-440b-4bad-87f5-176edddf3ba4-config\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.152834 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66td6\" (UniqueName: \"kubernetes.io/projected/17a31b12-440b-4bad-87f5-176edddf3ba4-kube-api-access-66td6\") pod \"dnsmasq-dns-698c57b6fc-kpzq9\" (UID: \"17a31b12-440b-4bad-87f5-176edddf3ba4\") " pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.183867 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.192985 4769 generic.go:334] "Generic (PLEG): container finished" podID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerID="2664ece92b0ae82a388545fd80520480a9905668f2be84a1c8fa4446e7e887e2" exitCode=0 Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.193023 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" event={"ID":"3546e562-3dc5-4b9f-821c-46c3d530a1c3","Type":"ContainerDied","Data":"2664ece92b0ae82a388545fd80520480a9905668f2be84a1c8fa4446e7e887e2"} Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.193048 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" event={"ID":"3546e562-3dc5-4b9f-821c-46c3d530a1c3","Type":"ContainerDied","Data":"1235760395ea9c09c32c5012675d28ad104942471b64dc7445018df104cfa38c"} Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.193057 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1235760395ea9c09c32c5012675d28ad104942471b64dc7445018df104cfa38c" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.210727 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295011 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295068 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295180 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295313 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.295402 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b76h7\" 
(UniqueName: \"kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7\") pod \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\" (UID: \"3546e562-3dc5-4b9f-821c-46c3d530a1c3\") " Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.313494 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7" (OuterVolumeSpecName: "kube-api-access-b76h7") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "kube-api-access-b76h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.371919 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.374944 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.386095 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.392019 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.397791 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config" (OuterVolumeSpecName: "config") pod "3546e562-3dc5-4b9f-821c-46c3d530a1c3" (UID: "3546e562-3dc5-4b9f-821c-46c3d530a1c3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409931 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409961 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409970 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409980 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b76h7\" (UniqueName: \"kubernetes.io/projected/3546e562-3dc5-4b9f-821c-46c3d530a1c3-kube-api-access-b76h7\") on node \"crc\" DevicePath \"\"" Oct 06 
07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409990 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.409997 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3546e562-3dc5-4b9f-821c-46c3d530a1c3-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:15 crc kubenswrapper[4769]: I1006 07:36:15.726522 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698c57b6fc-kpzq9"] Oct 06 07:36:15 crc kubenswrapper[4769]: W1006 07:36:15.735972 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17a31b12_440b_4bad_87f5_176edddf3ba4.slice/crio-e9d47ab6550cba30b29cee3ba31bab12d34817df1acf5d5cc3b8370934306b01 WatchSource:0}: Error finding container e9d47ab6550cba30b29cee3ba31bab12d34817df1acf5d5cc3b8370934306b01: Status 404 returned error can't find the container with id e9d47ab6550cba30b29cee3ba31bab12d34817df1acf5d5cc3b8370934306b01 Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.208220 4769 generic.go:334] "Generic (PLEG): container finished" podID="17a31b12-440b-4bad-87f5-176edddf3ba4" containerID="c4f8d80eb1d4e4eeb8a01e682c9048e1a8846387cd7e19b5f4326bc5f5cd5248" exitCode=0 Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.208320 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" event={"ID":"17a31b12-440b-4bad-87f5-176edddf3ba4","Type":"ContainerDied","Data":"c4f8d80eb1d4e4eeb8a01e682c9048e1a8846387cd7e19b5f4326bc5f5cd5248"} Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.208561 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df55567c-2xgsr" Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.208611 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" event={"ID":"17a31b12-440b-4bad-87f5-176edddf3ba4","Type":"ContainerStarted","Data":"e9d47ab6550cba30b29cee3ba31bab12d34817df1acf5d5cc3b8370934306b01"} Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.264571 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:36:16 crc kubenswrapper[4769]: I1006 07:36:16.275469 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7df55567c-2xgsr"] Oct 06 07:36:17 crc kubenswrapper[4769]: I1006 07:36:17.222212 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" event={"ID":"17a31b12-440b-4bad-87f5-176edddf3ba4","Type":"ContainerStarted","Data":"25e1f6eb045a7aefdfd90bebb3a8a1bb0008df27691a5a679bd32acbdcc2e015"} Oct 06 07:36:17 crc kubenswrapper[4769]: I1006 07:36:17.222613 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:17 crc kubenswrapper[4769]: I1006 07:36:17.249369 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" podStartSLOduration=3.249351754 podStartE2EDuration="3.249351754s" podCreationTimestamp="2025-10-06 07:36:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:36:17.245970201 +0000 UTC m=+1173.770251348" watchObservedRunningTime="2025-10-06 07:36:17.249351754 +0000 UTC m=+1173.773632911" Oct 06 07:36:18 crc kubenswrapper[4769]: I1006 07:36:18.184595 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" 
path="/var/lib/kubelet/pods/3546e562-3dc5-4b9f-821c-46c3d530a1c3/volumes" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.185983 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698c57b6fc-kpzq9" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.245467 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.245779 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerName="dnsmasq-dns" containerID="cri-o://fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588" gracePeriod=10 Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.713267 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.830761 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831130 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zsh6\" (UniqueName: \"kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831168 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0\") pod 
\"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831195 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831270 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831298 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.831314 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc\") pod \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\" (UID: \"b344930a-ea61-4f4a-8e1a-baf9abdacb58\") " Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.836753 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6" (OuterVolumeSpecName: "kube-api-access-4zsh6") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "kube-api-access-4zsh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.879703 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.886248 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.890592 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.894032 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.896303 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config" (OuterVolumeSpecName: "config") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.897588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b344930a-ea61-4f4a-8e1a-baf9abdacb58" (UID: "b344930a-ea61-4f4a-8e1a-baf9abdacb58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933452 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933490 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zsh6\" (UniqueName: \"kubernetes.io/projected/b344930a-ea61-4f4a-8e1a-baf9abdacb58-kube-api-access-4zsh6\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933504 4769 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933519 4769 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-config\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc 
kubenswrapper[4769]: I1006 07:36:25.933531 4769 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933542 4769 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:25 crc kubenswrapper[4769]: I1006 07:36:25.933553 4769 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b344930a-ea61-4f4a-8e1a-baf9abdacb58-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.330814 4769 generic.go:334] "Generic (PLEG): container finished" podID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerID="fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588" exitCode=0 Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.330871 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" event={"ID":"b344930a-ea61-4f4a-8e1a-baf9abdacb58","Type":"ContainerDied","Data":"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588"} Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.330919 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" event={"ID":"b344930a-ea61-4f4a-8e1a-baf9abdacb58","Type":"ContainerDied","Data":"0b11aa85ea616ab33a4e30b97e18aaefbaa6c86d0d54dd97f52ba92fa7b3e910"} Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.330927 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854c7674ff-m6hwp" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.330942 4769 scope.go:117] "RemoveContainer" containerID="fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.369863 4769 scope.go:117] "RemoveContainer" containerID="512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.373525 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.388632 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-854c7674ff-m6hwp"] Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.425984 4769 scope.go:117] "RemoveContainer" containerID="fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588" Oct 06 07:36:26 crc kubenswrapper[4769]: E1006 07:36:26.426476 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588\": container with ID starting with fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588 not found: ID does not exist" containerID="fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.426518 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588"} err="failed to get container status \"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588\": rpc error: code = NotFound desc = could not find container \"fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588\": container with ID starting with fe3a8030ca2946d916b44af42723c7f90a89e007351e468c851b96427e310588 not found: ID does not exist" Oct 06 
07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.426545 4769 scope.go:117] "RemoveContainer" containerID="512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3" Oct 06 07:36:26 crc kubenswrapper[4769]: E1006 07:36:26.426900 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3\": container with ID starting with 512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3 not found: ID does not exist" containerID="512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3" Oct 06 07:36:26 crc kubenswrapper[4769]: I1006 07:36:26.426933 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3"} err="failed to get container status \"512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3\": rpc error: code = NotFound desc = could not find container \"512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3\": container with ID starting with 512984974fbe6cbb93d12ff11b72a77b6b1bc2b9a7dcb9d648b1085a867ae6f3 not found: ID does not exist" Oct 06 07:36:28 crc kubenswrapper[4769]: I1006 07:36:28.189706 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" path="/var/lib/kubelet/pods/b344930a-ea61-4f4a-8e1a-baf9abdacb58/volumes" Oct 06 07:36:30 crc kubenswrapper[4769]: I1006 07:36:30.386930 4769 generic.go:334] "Generic (PLEG): container finished" podID="e5db2de8-5580-43f3-aa10-3a1cc7806fba" containerID="15f06c082fe0916094b786cc3a5f683aa632d02b8fa055e616ece7a980b53bb0" exitCode=0 Oct 06 07:36:30 crc kubenswrapper[4769]: I1006 07:36:30.387024 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"e5db2de8-5580-43f3-aa10-3a1cc7806fba","Type":"ContainerDied","Data":"15f06c082fe0916094b786cc3a5f683aa632d02b8fa055e616ece7a980b53bb0"} Oct 06 07:36:31 crc kubenswrapper[4769]: I1006 07:36:31.400697 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e5db2de8-5580-43f3-aa10-3a1cc7806fba","Type":"ContainerStarted","Data":"09e1246f8eead0751bf665cd3d5a1fea5e3e005bc052c82676af1f612b82c24b"} Oct 06 07:36:31 crc kubenswrapper[4769]: I1006 07:36:31.401738 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 06 07:36:31 crc kubenswrapper[4769]: I1006 07:36:31.432425 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.432397957 podStartE2EDuration="36.432397957s" podCreationTimestamp="2025-10-06 07:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:36:31.421158079 +0000 UTC m=+1187.945439276" watchObservedRunningTime="2025-10-06 07:36:31.432397957 +0000 UTC m=+1187.956679114" Oct 06 07:36:32 crc kubenswrapper[4769]: I1006 07:36:32.413406 4769 generic.go:334] "Generic (PLEG): container finished" podID="8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6" containerID="4ef9a5ded33da533c86ffdf0fca999dc39c46c75ac441dfef9e39f5f205ef0ca" exitCode=0 Oct 06 07:36:32 crc kubenswrapper[4769]: I1006 07:36:32.414032 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6","Type":"ContainerDied","Data":"4ef9a5ded33da533c86ffdf0fca999dc39c46c75ac441dfef9e39f5f205ef0ca"} Oct 06 07:36:33 crc kubenswrapper[4769]: I1006 07:36:33.426834 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6","Type":"ContainerStarted","Data":"f5f53d08e3e0fcc8b87d254c4e6e16a4b20a9c0419076ece98f294adeee3aa72"} Oct 06 07:36:33 crc kubenswrapper[4769]: I1006 07:36:33.427364 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:36:33 crc kubenswrapper[4769]: I1006 07:36:33.453804 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.45378797 podStartE2EDuration="37.45378797s" podCreationTimestamp="2025-10-06 07:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 07:36:33.449363189 +0000 UTC m=+1189.973644426" watchObservedRunningTime="2025-10-06 07:36:33.45378797 +0000 UTC m=+1189.978069117" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.512779 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw"] Oct 06 07:36:38 crc kubenswrapper[4769]: E1006 07:36:38.514323 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerName="init" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514349 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerName="init" Oct 06 07:36:38 crc kubenswrapper[4769]: E1006 07:36:38.514378 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="init" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514391 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="init" Oct 06 07:36:38 crc kubenswrapper[4769]: E1006 07:36:38.514428 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" 
containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514476 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: E1006 07:36:38.514505 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514516 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514805 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b344930a-ea61-4f4a-8e1a-baf9abdacb58" containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.514834 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3546e562-3dc5-4b9f-821c-46c3d530a1c3" containerName="dnsmasq-dns" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.517216 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.522999 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.523041 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.523055 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.523218 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.525049 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw"] Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.561748 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498s6\" (UniqueName: \"kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.561844 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.561893 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.561981 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.663358 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.663447 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.663511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.663550 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498s6\" (UniqueName: \"kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.668452 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.668459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.677766 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.684669 4769 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498s6\" (UniqueName: \"kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:38 crc kubenswrapper[4769]: I1006 07:36:38.850986 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:36:39 crc kubenswrapper[4769]: I1006 07:36:39.528878 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 07:36:39 crc kubenswrapper[4769]: I1006 07:36:39.529126 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw"] Oct 06 07:36:40 crc kubenswrapper[4769]: I1006 07:36:40.506645 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" event={"ID":"1be57ab5-e167-4a82-a668-0ac08f6d9a18","Type":"ContainerStarted","Data":"8fb34599876133f825777844e9ef74427c8a6c6dde3f011a4053adfd5f6f4d3e"} Oct 06 07:36:46 crc kubenswrapper[4769]: I1006 07:36:46.309575 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 06 07:36:47 crc kubenswrapper[4769]: I1006 07:36:47.342593 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 06 07:36:48 crc kubenswrapper[4769]: I1006 07:36:48.914032 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:36:49 crc kubenswrapper[4769]: I1006 07:36:49.587010 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" 
event={"ID":"1be57ab5-e167-4a82-a668-0ac08f6d9a18","Type":"ContainerStarted","Data":"d1bb3c0e7ae280befc5e85d25152ccb161fae18169d1c48d17c9b91f60ee5300"} Oct 06 07:36:49 crc kubenswrapper[4769]: I1006 07:36:49.605145 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" podStartSLOduration=2.223097676 podStartE2EDuration="11.605132582s" podCreationTimestamp="2025-10-06 07:36:38 +0000 UTC" firstStartedPulling="2025-10-06 07:36:39.528633581 +0000 UTC m=+1196.052914738" lastFinishedPulling="2025-10-06 07:36:48.910668497 +0000 UTC m=+1205.434949644" observedRunningTime="2025-10-06 07:36:49.60394985 +0000 UTC m=+1206.128231037" watchObservedRunningTime="2025-10-06 07:36:49.605132582 +0000 UTC m=+1206.129413729" Oct 06 07:37:05 crc kubenswrapper[4769]: I1006 07:37:05.784897 4769 generic.go:334] "Generic (PLEG): container finished" podID="1be57ab5-e167-4a82-a668-0ac08f6d9a18" containerID="d1bb3c0e7ae280befc5e85d25152ccb161fae18169d1c48d17c9b91f60ee5300" exitCode=0 Oct 06 07:37:05 crc kubenswrapper[4769]: I1006 07:37:05.785007 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" event={"ID":"1be57ab5-e167-4a82-a668-0ac08f6d9a18","Type":"ContainerDied","Data":"d1bb3c0e7ae280befc5e85d25152ccb161fae18169d1c48d17c9b91f60ee5300"} Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.371063 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.484388 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle\") pod \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.484577 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498s6\" (UniqueName: \"kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6\") pod \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.484691 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key\") pod \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.484755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory\") pod \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\" (UID: \"1be57ab5-e167-4a82-a668-0ac08f6d9a18\") " Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.504328 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1be57ab5-e167-4a82-a668-0ac08f6d9a18" (UID: "1be57ab5-e167-4a82-a668-0ac08f6d9a18"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.504511 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6" (OuterVolumeSpecName: "kube-api-access-498s6") pod "1be57ab5-e167-4a82-a668-0ac08f6d9a18" (UID: "1be57ab5-e167-4a82-a668-0ac08f6d9a18"). InnerVolumeSpecName "kube-api-access-498s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.523051 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1be57ab5-e167-4a82-a668-0ac08f6d9a18" (UID: "1be57ab5-e167-4a82-a668-0ac08f6d9a18"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.528509 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory" (OuterVolumeSpecName: "inventory") pod "1be57ab5-e167-4a82-a668-0ac08f6d9a18" (UID: "1be57ab5-e167-4a82-a668-0ac08f6d9a18"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.587363 4769 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.587412 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498s6\" (UniqueName: \"kubernetes.io/projected/1be57ab5-e167-4a82-a668-0ac08f6d9a18-kube-api-access-498s6\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.587446 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.587461 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1be57ab5-e167-4a82-a668-0ac08f6d9a18-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.812545 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" event={"ID":"1be57ab5-e167-4a82-a668-0ac08f6d9a18","Type":"ContainerDied","Data":"8fb34599876133f825777844e9ef74427c8a6c6dde3f011a4053adfd5f6f4d3e"} Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.812595 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fb34599876133f825777844e9ef74427c8a6c6dde3f011a4053adfd5f6f4d3e" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.812668 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.909964 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6"] Oct 06 07:37:07 crc kubenswrapper[4769]: E1006 07:37:07.910343 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1be57ab5-e167-4a82-a668-0ac08f6d9a18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.910364 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="1be57ab5-e167-4a82-a668-0ac08f6d9a18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.910584 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="1be57ab5-e167-4a82-a668-0ac08f6d9a18" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.911218 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.913859 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.914197 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.917264 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.918135 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.939599 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6"] Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.994709 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.995128 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:07 crc kubenswrapper[4769]: I1006 07:37:07.995168 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcttc\" (UniqueName: \"kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.097004 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.097247 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.097308 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcttc\" (UniqueName: \"kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.103938 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.107486 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.118010 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcttc\" (UniqueName: \"kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-gm4q6\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.235454 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.626579 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6"] Oct 06 07:37:08 crc kubenswrapper[4769]: W1006 07:37:08.631700 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44a1d31_d466_4c44_b8ea_088f2011e9b3.slice/crio-f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa WatchSource:0}: Error finding container f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa: Status 404 returned error can't find the container with id f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa Oct 06 07:37:08 crc kubenswrapper[4769]: I1006 07:37:08.824925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" event={"ID":"c44a1d31-d466-4c44-b8ea-088f2011e9b3","Type":"ContainerStarted","Data":"f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa"} Oct 06 07:37:09 crc kubenswrapper[4769]: I1006 07:37:09.838195 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" event={"ID":"c44a1d31-d466-4c44-b8ea-088f2011e9b3","Type":"ContainerStarted","Data":"461d0b94d2e2da49d2e63e29f5d3e38761e5eeaaf232bbaf765b8b78e3b9673f"} Oct 06 07:37:09 crc kubenswrapper[4769]: I1006 07:37:09.866365 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" podStartSLOduration=2.300515295 podStartE2EDuration="2.866345203s" podCreationTimestamp="2025-10-06 07:37:07 +0000 UTC" firstStartedPulling="2025-10-06 07:37:08.635344951 +0000 UTC m=+1225.159626108" lastFinishedPulling="2025-10-06 07:37:09.201174859 +0000 UTC m=+1225.725456016" observedRunningTime="2025-10-06 
07:37:09.866199669 +0000 UTC m=+1226.390480846" watchObservedRunningTime="2025-10-06 07:37:09.866345203 +0000 UTC m=+1226.390626360" Oct 06 07:37:12 crc kubenswrapper[4769]: I1006 07:37:12.867271 4769 generic.go:334] "Generic (PLEG): container finished" podID="c44a1d31-d466-4c44-b8ea-088f2011e9b3" containerID="461d0b94d2e2da49d2e63e29f5d3e38761e5eeaaf232bbaf765b8b78e3b9673f" exitCode=0 Oct 06 07:37:12 crc kubenswrapper[4769]: I1006 07:37:12.867394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" event={"ID":"c44a1d31-d466-4c44-b8ea-088f2011e9b3","Type":"ContainerDied","Data":"461d0b94d2e2da49d2e63e29f5d3e38761e5eeaaf232bbaf765b8b78e3b9673f"} Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.311950 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.433014 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key\") pod \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.433099 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcttc\" (UniqueName: \"kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc\") pod \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.433388 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory\") pod \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\" (UID: \"c44a1d31-d466-4c44-b8ea-088f2011e9b3\") " Oct 06 07:37:14 crc 
kubenswrapper[4769]: I1006 07:37:14.438640 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc" (OuterVolumeSpecName: "kube-api-access-kcttc") pod "c44a1d31-d466-4c44-b8ea-088f2011e9b3" (UID: "c44a1d31-d466-4c44-b8ea-088f2011e9b3"). InnerVolumeSpecName "kube-api-access-kcttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.465572 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c44a1d31-d466-4c44-b8ea-088f2011e9b3" (UID: "c44a1d31-d466-4c44-b8ea-088f2011e9b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.468892 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory" (OuterVolumeSpecName: "inventory") pod "c44a1d31-d466-4c44-b8ea-088f2011e9b3" (UID: "c44a1d31-d466-4c44-b8ea-088f2011e9b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.535785 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.535946 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcttc\" (UniqueName: \"kubernetes.io/projected/c44a1d31-d466-4c44-b8ea-088f2011e9b3-kube-api-access-kcttc\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.536026 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c44a1d31-d466-4c44-b8ea-088f2011e9b3-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.910740 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" event={"ID":"c44a1d31-d466-4c44-b8ea-088f2011e9b3","Type":"ContainerDied","Data":"f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa"} Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.910785 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f879921009f994e61e025fe75ca5c90ff82f5384d259c7aa0033a49f09ab61aa" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.910862 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-gm4q6" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.994973 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7"] Oct 06 07:37:14 crc kubenswrapper[4769]: E1006 07:37:14.995381 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44a1d31-d466-4c44-b8ea-088f2011e9b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.995399 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44a1d31-d466-4c44-b8ea-088f2011e9b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.995616 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44a1d31-d466-4c44-b8ea-088f2011e9b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.996249 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.998801 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.998821 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.999099 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:37:14 crc kubenswrapper[4769]: I1006 07:37:14.999162 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.005649 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7"] Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.148794 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.149413 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.149524 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.149786 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcnf\" (UniqueName: \"kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.252501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcnf\" (UniqueName: \"kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.252741 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.252898 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.253036 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.257155 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.257746 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.257897 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.276996 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-nqcnf\" (UniqueName: \"kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.315924 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.648686 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7"] Oct 06 07:37:15 crc kubenswrapper[4769]: I1006 07:37:15.925881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" event={"ID":"761139da-805f-4c7e-a9af-6dfd529df0d5","Type":"ContainerStarted","Data":"113c0a229bc1d3b0348f331f256951f76b3d3957f8affa7851103d03a9f4080c"} Oct 06 07:37:16 crc kubenswrapper[4769]: I1006 07:37:16.941874 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" event={"ID":"761139da-805f-4c7e-a9af-6dfd529df0d5","Type":"ContainerStarted","Data":"f330ff5fc54deed54fe13da5cf918a31a0d7d2f9e3af24f2af3fdaf7c558b4cc"} Oct 06 07:37:16 crc kubenswrapper[4769]: I1006 07:37:16.977208 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" podStartSLOduration=2.5284379169999998 podStartE2EDuration="2.977181902s" podCreationTimestamp="2025-10-06 07:37:14 +0000 UTC" firstStartedPulling="2025-10-06 07:37:15.656196408 +0000 UTC m=+1232.180477565" lastFinishedPulling="2025-10-06 07:37:16.104940393 +0000 UTC m=+1232.629221550" observedRunningTime="2025-10-06 07:37:16.973569313 +0000 UTC m=+1233.497850490" watchObservedRunningTime="2025-10-06 
07:37:16.977181902 +0000 UTC m=+1233.501463089" Oct 06 07:37:52 crc kubenswrapper[4769]: I1006 07:37:52.245502 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:37:52 crc kubenswrapper[4769]: I1006 07:37:52.246060 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:37:54 crc kubenswrapper[4769]: I1006 07:37:54.189208 4769 scope.go:117] "RemoveContainer" containerID="8d0ba270cffa0f3b1f1e3d59d9e9dc1627c2b61cebced653728e1d11a9b0bfe2" Oct 06 07:37:54 crc kubenswrapper[4769]: I1006 07:37:54.228349 4769 scope.go:117] "RemoveContainer" containerID="7adebf2664adf0bbe8bb25a3dffaf53061421f7b25c2542fe629dba849544e38" Oct 06 07:38:22 crc kubenswrapper[4769]: I1006 07:38:22.245788 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:38:22 crc kubenswrapper[4769]: I1006 07:38:22.246392 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:38:34 crc kubenswrapper[4769]: I1006 07:38:34.826963 4769 generic.go:334] "Generic 
(PLEG): container finished" podID="761139da-805f-4c7e-a9af-6dfd529df0d5" containerID="f330ff5fc54deed54fe13da5cf918a31a0d7d2f9e3af24f2af3fdaf7c558b4cc" exitCode=2 Oct 06 07:38:34 crc kubenswrapper[4769]: I1006 07:38:34.827073 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" event={"ID":"761139da-805f-4c7e-a9af-6dfd529df0d5","Type":"ContainerDied","Data":"f330ff5fc54deed54fe13da5cf918a31a0d7d2f9e3af24f2af3fdaf7c558b4cc"} Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.285869 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.323054 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory\") pod \"761139da-805f-4c7e-a9af-6dfd529df0d5\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.323096 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key\") pod \"761139da-805f-4c7e-a9af-6dfd529df0d5\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.323198 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqcnf\" (UniqueName: \"kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf\") pod \"761139da-805f-4c7e-a9af-6dfd529df0d5\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.323222 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle\") pod \"761139da-805f-4c7e-a9af-6dfd529df0d5\" (UID: \"761139da-805f-4c7e-a9af-6dfd529df0d5\") " Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.328601 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf" (OuterVolumeSpecName: "kube-api-access-nqcnf") pod "761139da-805f-4c7e-a9af-6dfd529df0d5" (UID: "761139da-805f-4c7e-a9af-6dfd529df0d5"). InnerVolumeSpecName "kube-api-access-nqcnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.328762 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "761139da-805f-4c7e-a9af-6dfd529df0d5" (UID: "761139da-805f-4c7e-a9af-6dfd529df0d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.363456 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "761139da-805f-4c7e-a9af-6dfd529df0d5" (UID: "761139da-805f-4c7e-a9af-6dfd529df0d5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.366336 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory" (OuterVolumeSpecName: "inventory") pod "761139da-805f-4c7e-a9af-6dfd529df0d5" (UID: "761139da-805f-4c7e-a9af-6dfd529df0d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.425370 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqcnf\" (UniqueName: \"kubernetes.io/projected/761139da-805f-4c7e-a9af-6dfd529df0d5-kube-api-access-nqcnf\") on node \"crc\" DevicePath \"\"" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.425409 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.425437 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.425450 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/761139da-805f-4c7e-a9af-6dfd529df0d5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.846601 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" event={"ID":"761139da-805f-4c7e-a9af-6dfd529df0d5","Type":"ContainerDied","Data":"113c0a229bc1d3b0348f331f256951f76b3d3957f8affa7851103d03a9f4080c"} Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.847093 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113c0a229bc1d3b0348f331f256951f76b3d3957f8affa7851103d03a9f4080c" Oct 06 07:38:36 crc kubenswrapper[4769]: I1006 07:38:36.846644 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.033581 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4"] Oct 06 07:38:44 crc kubenswrapper[4769]: E1006 07:38:44.034619 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761139da-805f-4c7e-a9af-6dfd529df0d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.034640 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="761139da-805f-4c7e-a9af-6dfd529df0d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.034887 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="761139da-805f-4c7e-a9af-6dfd529df0d5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.035744 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.038049 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.038523 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.039732 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.040176 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.057539 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4"] Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.167615 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvs8t\" (UniqueName: \"kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.167905 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.168146 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.168317 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.270510 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.270570 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.270663 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvs8t\" (UniqueName: \"kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.270693 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.273871 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.274075 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.277089 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.294264 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.295704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.296225 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvs8t\" (UniqueName: \"kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.409496 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.418392 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:38:44 crc kubenswrapper[4769]: I1006 07:38:44.957480 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4"] Oct 06 07:38:45 crc kubenswrapper[4769]: I1006 07:38:45.445262 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:38:45 crc kubenswrapper[4769]: I1006 07:38:45.946659 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" event={"ID":"4df5bcab-0094-4bf9-bb2d-8e1376e55260","Type":"ContainerStarted","Data":"54408af1a9da0bf718f4e7530e29f8074f5ccde4cf02010f3713612900069252"} Oct 06 07:38:45 crc kubenswrapper[4769]: I1006 07:38:45.946993 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" 
event={"ID":"4df5bcab-0094-4bf9-bb2d-8e1376e55260","Type":"ContainerStarted","Data":"c2893ee53a365b887a26038483e0fa2ff1c6596a6f0df830e9c075c041f8c659"} Oct 06 07:38:45 crc kubenswrapper[4769]: I1006 07:38:45.977910 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" podStartSLOduration=1.508988046 podStartE2EDuration="1.977881638s" podCreationTimestamp="2025-10-06 07:38:44 +0000 UTC" firstStartedPulling="2025-10-06 07:38:44.973412159 +0000 UTC m=+1321.497693316" lastFinishedPulling="2025-10-06 07:38:45.442305761 +0000 UTC m=+1321.966586908" observedRunningTime="2025-10-06 07:38:45.970032504 +0000 UTC m=+1322.494313651" watchObservedRunningTime="2025-10-06 07:38:45.977881638 +0000 UTC m=+1322.502162825" Oct 06 07:38:52 crc kubenswrapper[4769]: I1006 07:38:52.245110 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:38:52 crc kubenswrapper[4769]: I1006 07:38:52.245799 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:38:52 crc kubenswrapper[4769]: I1006 07:38:52.245867 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:38:52 crc kubenswrapper[4769]: I1006 07:38:52.246730 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:38:52 crc kubenswrapper[4769]: I1006 07:38:52.246827 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59" gracePeriod=600 Oct 06 07:38:53 crc kubenswrapper[4769]: I1006 07:38:53.039467 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59" exitCode=0 Oct 06 07:38:53 crc kubenswrapper[4769]: I1006 07:38:53.039556 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59"} Oct 06 07:38:53 crc kubenswrapper[4769]: I1006 07:38:53.040047 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf"} Oct 06 07:38:53 crc kubenswrapper[4769]: I1006 07:38:53.040085 4769 scope.go:117] "RemoveContainer" containerID="47403349fa0900a44adea3739e07d33b0b59adbbdb84ed4f63521b0ae42276d3" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.341282 4769 scope.go:117] "RemoveContainer" containerID="4c81c0b1f8f70b17392b85053686fc53a02b844b623856fd48bd917147373d34" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 
07:38:54.374539 4769 scope.go:117] "RemoveContainer" containerID="3db2ec824a7658983cb2407531c3fb79ec06373971f4a53b76fb5c3574c717ec" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.402447 4769 scope.go:117] "RemoveContainer" containerID="826e9885432bee17e529c9ee89002c393bf1ed59cb34d35521f463e10d5ef2ab" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.438766 4769 scope.go:117] "RemoveContainer" containerID="b1c69624f8a28e7bcbbc068ea3d942e3a4e9dd2240ed49463facc7ad480bb2c1" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.477759 4769 scope.go:117] "RemoveContainer" containerID="c598899dab4197ff1d8a4013dade77ab3f21d1890b6b25fb7e44c9f2858f4460" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.542572 4769 scope.go:117] "RemoveContainer" containerID="635e31c0a7303277dad2663ebcc13d02f7d7ccf35b3d231fb2f29c3f04f8fe30" Oct 06 07:38:54 crc kubenswrapper[4769]: I1006 07:38:54.567412 4769 scope.go:117] "RemoveContainer" containerID="17c0f4ca2d17a3a7d2eb8079fa66398b280f5b54f45045da1a861c223a917018" Oct 06 07:39:21 crc kubenswrapper[4769]: I1006 07:39:21.321157 4769 generic.go:334] "Generic (PLEG): container finished" podID="4df5bcab-0094-4bf9-bb2d-8e1376e55260" containerID="54408af1a9da0bf718f4e7530e29f8074f5ccde4cf02010f3713612900069252" exitCode=2 Oct 06 07:39:21 crc kubenswrapper[4769]: I1006 07:39:21.321239 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" event={"ID":"4df5bcab-0094-4bf9-bb2d-8e1376e55260","Type":"ContainerDied","Data":"54408af1a9da0bf718f4e7530e29f8074f5ccde4cf02010f3713612900069252"} Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.834142 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.978277 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key\") pod \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.978358 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle\") pod \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.978478 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory\") pod \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.978621 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvs8t\" (UniqueName: \"kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t\") pod \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\" (UID: \"4df5bcab-0094-4bf9-bb2d-8e1376e55260\") " Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.993920 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t" (OuterVolumeSpecName: "kube-api-access-fvs8t") pod "4df5bcab-0094-4bf9-bb2d-8e1376e55260" (UID: "4df5bcab-0094-4bf9-bb2d-8e1376e55260"). InnerVolumeSpecName "kube-api-access-fvs8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:39:22 crc kubenswrapper[4769]: I1006 07:39:22.998604 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4df5bcab-0094-4bf9-bb2d-8e1376e55260" (UID: "4df5bcab-0094-4bf9-bb2d-8e1376e55260"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.010712 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory" (OuterVolumeSpecName: "inventory") pod "4df5bcab-0094-4bf9-bb2d-8e1376e55260" (UID: "4df5bcab-0094-4bf9-bb2d-8e1376e55260"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.015648 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4df5bcab-0094-4bf9-bb2d-8e1376e55260" (UID: "4df5bcab-0094-4bf9-bb2d-8e1376e55260"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.081692 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.081755 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.081788 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4df5bcab-0094-4bf9-bb2d-8e1376e55260-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.081815 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvs8t\" (UniqueName: \"kubernetes.io/projected/4df5bcab-0094-4bf9-bb2d-8e1376e55260-kube-api-access-fvs8t\") on node \"crc\" DevicePath \"\"" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.342502 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" event={"ID":"4df5bcab-0094-4bf9-bb2d-8e1376e55260","Type":"ContainerDied","Data":"c2893ee53a365b887a26038483e0fa2ff1c6596a6f0df830e9c075c041f8c659"} Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.342877 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2893ee53a365b887a26038483e0fa2ff1c6596a6f0df830e9c075c041f8c659" Oct 06 07:39:23 crc kubenswrapper[4769]: I1006 07:39:23.342602 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.053793 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj"] Oct 06 07:39:40 crc kubenswrapper[4769]: E1006 07:39:40.055087 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4df5bcab-0094-4bf9-bb2d-8e1376e55260" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.055112 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4df5bcab-0094-4bf9-bb2d-8e1376e55260" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.055529 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4df5bcab-0094-4bf9-bb2d-8e1376e55260" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.056791 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.060745 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.060821 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.061238 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.061295 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.085753 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj"] Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.215212 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.215322 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.215411 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smq6q\" (UniqueName: \"kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.215620 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.317370 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.317467 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.317511 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smq6q\" (UniqueName: \"kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.317654 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.332574 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.333179 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.334345 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smq6q\" (UniqueName: \"kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.334934 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.386821 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:39:40 crc kubenswrapper[4769]: I1006 07:39:40.939366 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj"] Oct 06 07:39:41 crc kubenswrapper[4769]: I1006 07:39:41.584783 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" event={"ID":"99cfdb3d-0fd9-47a4-b6af-70f78b733696","Type":"ContainerStarted","Data":"e06a729b09bffe1246a11af3e02acf37cc6a2d4b01f488b0352382628fcb7e32"} Oct 06 07:39:42 crc kubenswrapper[4769]: I1006 07:39:42.600240 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" event={"ID":"99cfdb3d-0fd9-47a4-b6af-70f78b733696","Type":"ContainerStarted","Data":"bd731417309e99bf277d26d632a6a998fcce48461db612598e74a79804a25b9a"} Oct 06 07:39:42 crc kubenswrapper[4769]: I1006 07:39:42.629245 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" podStartSLOduration=2.1464298250000002 podStartE2EDuration="2.629179217s" podCreationTimestamp="2025-10-06 07:39:40 +0000 UTC" firstStartedPulling="2025-10-06 07:39:40.942727039 +0000 UTC m=+1377.467008186" lastFinishedPulling="2025-10-06 07:39:41.425476431 +0000 UTC m=+1377.949757578" observedRunningTime="2025-10-06 07:39:42.625412115 +0000 UTC m=+1379.149693312" watchObservedRunningTime="2025-10-06 
07:39:42.629179217 +0000 UTC m=+1379.153460404" Oct 06 07:40:22 crc kubenswrapper[4769]: I1006 07:40:22.048292 4769 generic.go:334] "Generic (PLEG): container finished" podID="99cfdb3d-0fd9-47a4-b6af-70f78b733696" containerID="bd731417309e99bf277d26d632a6a998fcce48461db612598e74a79804a25b9a" exitCode=2 Oct 06 07:40:22 crc kubenswrapper[4769]: I1006 07:40:22.048374 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" event={"ID":"99cfdb3d-0fd9-47a4-b6af-70f78b733696","Type":"ContainerDied","Data":"bd731417309e99bf277d26d632a6a998fcce48461db612598e74a79804a25b9a"} Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.481174 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.553407 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory\") pod \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.553913 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smq6q\" (UniqueName: \"kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q\") pod \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.553943 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key\") pod \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.554021 4769 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle\") pod \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\" (UID: \"99cfdb3d-0fd9-47a4-b6af-70f78b733696\") " Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.559521 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q" (OuterVolumeSpecName: "kube-api-access-smq6q") pod "99cfdb3d-0fd9-47a4-b6af-70f78b733696" (UID: "99cfdb3d-0fd9-47a4-b6af-70f78b733696"). InnerVolumeSpecName "kube-api-access-smq6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.559655 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "99cfdb3d-0fd9-47a4-b6af-70f78b733696" (UID: "99cfdb3d-0fd9-47a4-b6af-70f78b733696"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.591688 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99cfdb3d-0fd9-47a4-b6af-70f78b733696" (UID: "99cfdb3d-0fd9-47a4-b6af-70f78b733696"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.593275 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory" (OuterVolumeSpecName: "inventory") pod "99cfdb3d-0fd9-47a4-b6af-70f78b733696" (UID: "99cfdb3d-0fd9-47a4-b6af-70f78b733696"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.656304 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smq6q\" (UniqueName: \"kubernetes.io/projected/99cfdb3d-0fd9-47a4-b6af-70f78b733696-kube-api-access-smq6q\") on node \"crc\" DevicePath \"\"" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.656349 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.656363 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:40:23 crc kubenswrapper[4769]: I1006 07:40:23.656375 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99cfdb3d-0fd9-47a4-b6af-70f78b733696-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:40:24 crc kubenswrapper[4769]: I1006 07:40:24.075113 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" event={"ID":"99cfdb3d-0fd9-47a4-b6af-70f78b733696","Type":"ContainerDied","Data":"e06a729b09bffe1246a11af3e02acf37cc6a2d4b01f488b0352382628fcb7e32"} Oct 06 07:40:24 crc kubenswrapper[4769]: I1006 07:40:24.075411 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e06a729b09bffe1246a11af3e02acf37cc6a2d4b01f488b0352382628fcb7e32" Oct 06 07:40:24 crc kubenswrapper[4769]: I1006 07:40:24.075227 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj" Oct 06 07:40:52 crc kubenswrapper[4769]: I1006 07:40:52.245476 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:40:52 crc kubenswrapper[4769]: I1006 07:40:52.245861 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.185935 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:40:54 crc kubenswrapper[4769]: E1006 07:40:54.186689 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99cfdb3d-0fd9-47a4-b6af-70f78b733696" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.186708 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cfdb3d-0fd9-47a4-b6af-70f78b733696" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.186985 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="99cfdb3d-0fd9-47a4-b6af-70f78b733696" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.188701 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.215333 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.286253 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.286346 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzkj9\" (UniqueName: \"kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.286397 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.387626 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzkj9\" (UniqueName: \"kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.387689 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.387755 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.388165 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.388665 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.424556 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzkj9\" (UniqueName: \"kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9\") pod \"certified-operators-zxltv\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.508940 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:40:54 crc kubenswrapper[4769]: I1006 07:40:54.687154 4769 scope.go:117] "RemoveContainer" containerID="0989cc76c76aa106c9a5b6be82e0f7d7d48c0769f5f7fbc891c7ed9eb81f4d7b" Oct 06 07:40:55 crc kubenswrapper[4769]: I1006 07:40:55.010984 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:40:55 crc kubenswrapper[4769]: I1006 07:40:55.417150 4769 generic.go:334] "Generic (PLEG): container finished" podID="fda43d75-4725-494d-aef1-3dd3b1639559" containerID="31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c" exitCode=0 Oct 06 07:40:55 crc kubenswrapper[4769]: I1006 07:40:55.417400 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerDied","Data":"31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c"} Oct 06 07:40:55 crc kubenswrapper[4769]: I1006 07:40:55.417439 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerStarted","Data":"ef94cbf29419867a2a0248377bdd08568ca0af71bd85c9c87d0b02558dfc9322"} Oct 06 07:40:55 crc kubenswrapper[4769]: E1006 07:40:55.417741 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda43d75_4725_494d_aef1_3dd3b1639559.slice/crio-31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda43d75_4725_494d_aef1_3dd3b1639559.slice/crio-conmon-31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c.scope\": RecentStats: unable to find data in memory cache]" Oct 06 
07:40:56 crc kubenswrapper[4769]: I1006 07:40:56.432577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerStarted","Data":"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad"} Oct 06 07:40:57 crc kubenswrapper[4769]: I1006 07:40:57.449485 4769 generic.go:334] "Generic (PLEG): container finished" podID="fda43d75-4725-494d-aef1-3dd3b1639559" containerID="24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad" exitCode=0 Oct 06 07:40:57 crc kubenswrapper[4769]: I1006 07:40:57.449571 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerDied","Data":"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad"} Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.183457 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"] Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.187250 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.194939 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"] Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.269590 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.270190 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.270276 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnks\" (UniqueName: \"kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.372230 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.372616 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-vlnks\" (UniqueName: \"kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.372707 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.373176 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.373239 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.406182 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlnks\" (UniqueName: \"kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks\") pod \"redhat-marketplace-lzrst\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") " pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.460580 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerStarted","Data":"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e"} Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.491990 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zxltv" podStartSLOduration=1.855643724 podStartE2EDuration="4.491968943s" podCreationTimestamp="2025-10-06 07:40:54 +0000 UTC" firstStartedPulling="2025-10-06 07:40:55.419026452 +0000 UTC m=+1451.943307599" lastFinishedPulling="2025-10-06 07:40:58.055351641 +0000 UTC m=+1454.579632818" observedRunningTime="2025-10-06 07:40:58.480107069 +0000 UTC m=+1455.004388236" watchObservedRunningTime="2025-10-06 07:40:58.491968943 +0000 UTC m=+1455.016250100" Oct 06 07:40:58 crc kubenswrapper[4769]: I1006 07:40:58.542207 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzrst" Oct 06 07:40:59 crc kubenswrapper[4769]: I1006 07:40:59.032269 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"] Oct 06 07:40:59 crc kubenswrapper[4769]: I1006 07:40:59.470303 4769 generic.go:334] "Generic (PLEG): container finished" podID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerID="d394cc2d6fab79db4e7dc112865f3efee17c07f68af1d30621407e45ca6add6e" exitCode=0 Oct 06 07:40:59 crc kubenswrapper[4769]: I1006 07:40:59.470364 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerDied","Data":"d394cc2d6fab79db4e7dc112865f3efee17c07f68af1d30621407e45ca6add6e"} Oct 06 07:40:59 crc kubenswrapper[4769]: I1006 07:40:59.470615 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" 
event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerStarted","Data":"82e9d4aaa8667b8ff0ec07d5b91fc88049dee895178c271538a680365131982e"} Oct 06 07:41:00 crc kubenswrapper[4769]: I1006 07:41:00.484662 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerStarted","Data":"a26423dd49ed3997a75eab50f849fbf325402f54095a9efc488d0f991f2ff94e"} Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.032029 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d"] Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.033897 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.041272 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.041495 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.041664 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.041709 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.053395 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d"] Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.148360 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.148460 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.148520 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-252v7\" (UniqueName: \"kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.149238 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.251693 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.251825 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.251897 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-252v7\" (UniqueName: \"kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.252080 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.258318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.259073 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.264735 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.277513 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-252v7\" (UniqueName: \"kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.428746 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.508691 4769 generic.go:334] "Generic (PLEG): container finished" podID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerID="a26423dd49ed3997a75eab50f849fbf325402f54095a9efc488d0f991f2ff94e" exitCode=0 Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.508729 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerDied","Data":"a26423dd49ed3997a75eab50f849fbf325402f54095a9efc488d0f991f2ff94e"} Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.508755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerStarted","Data":"9c40612a0f89ef8be62cb9c4b42394c04954dcba588f74234236b73d2298147b"} Oct 06 07:41:01 crc kubenswrapper[4769]: I1006 07:41:01.533924 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lzrst" podStartSLOduration=2.032441441 podStartE2EDuration="3.533908188s" podCreationTimestamp="2025-10-06 07:40:58 +0000 UTC" firstStartedPulling="2025-10-06 07:40:59.472760336 +0000 UTC m=+1455.997041493" lastFinishedPulling="2025-10-06 07:41:00.974227083 +0000 UTC m=+1457.498508240" observedRunningTime="2025-10-06 07:41:01.531245336 +0000 UTC m=+1458.055526503" watchObservedRunningTime="2025-10-06 07:41:01.533908188 +0000 UTC m=+1458.058189335" Oct 06 07:41:02 crc kubenswrapper[4769]: I1006 07:41:02.040058 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d"] Oct 06 07:41:02 crc kubenswrapper[4769]: W1006 07:41:02.043727 4769 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3e88895_8ab2_4b5c_a4ea_60cdeb335a77.slice/crio-16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8 WatchSource:0}: Error finding container 16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8: Status 404 returned error can't find the container with id 16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8 Oct 06 07:41:02 crc kubenswrapper[4769]: I1006 07:41:02.524888 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" event={"ID":"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77","Type":"ContainerStarted","Data":"16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8"} Oct 06 07:41:03 crc kubenswrapper[4769]: I1006 07:41:03.533519 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" event={"ID":"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77","Type":"ContainerStarted","Data":"e322e86ec263ccc73e5c229c355fc04f30e578d292941aa7a1a409613b7e3067"} Oct 06 07:41:03 crc kubenswrapper[4769]: I1006 07:41:03.559494 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" podStartSLOduration=1.962765715 podStartE2EDuration="2.55947549s" podCreationTimestamp="2025-10-06 07:41:01 +0000 UTC" firstStartedPulling="2025-10-06 07:41:02.046883921 +0000 UTC m=+1458.571165078" lastFinishedPulling="2025-10-06 07:41:02.643593716 +0000 UTC m=+1459.167874853" observedRunningTime="2025-10-06 07:41:03.555213734 +0000 UTC m=+1460.079494881" watchObservedRunningTime="2025-10-06 07:41:03.55947549 +0000 UTC m=+1460.083756637" Oct 06 07:41:04 crc kubenswrapper[4769]: I1006 07:41:04.509309 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:04 crc kubenswrapper[4769]: I1006 07:41:04.509629 4769 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:04 crc kubenswrapper[4769]: I1006 07:41:04.575351 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:04 crc kubenswrapper[4769]: I1006 07:41:04.624053 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:04 crc kubenswrapper[4769]: I1006 07:41:04.818898 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:41:06 crc kubenswrapper[4769]: I1006 07:41:06.563381 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zxltv" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="registry-server" containerID="cri-o://11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e" gracePeriod=2 Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.144936 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.220186 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"] Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.220570 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="extract-content" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.220585 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="extract-content" Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.220602 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="extract-utilities" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.220610 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="extract-utilities" Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.220617 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="registry-server" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.220624 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="registry-server" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.220835 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" containerName="registry-server" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.222134 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.238015 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"] Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.267898 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content\") pod \"fda43d75-4725-494d-aef1-3dd3b1639559\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.268185 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzkj9\" (UniqueName: \"kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9\") pod \"fda43d75-4725-494d-aef1-3dd3b1639559\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.268315 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities\") pod \"fda43d75-4725-494d-aef1-3dd3b1639559\" (UID: \"fda43d75-4725-494d-aef1-3dd3b1639559\") " Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.268730 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzq8b\" (UniqueName: \"kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.268853 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities" (OuterVolumeSpecName: "utilities") pod 
"fda43d75-4725-494d-aef1-3dd3b1639559" (UID: "fda43d75-4725-494d-aef1-3dd3b1639559"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.269080 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.269216 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.269352 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.273614 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9" (OuterVolumeSpecName: "kube-api-access-zzkj9") pod "fda43d75-4725-494d-aef1-3dd3b1639559" (UID: "fda43d75-4725-494d-aef1-3dd3b1639559"). InnerVolumeSpecName "kube-api-access-zzkj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.307653 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fda43d75-4725-494d-aef1-3dd3b1639559" (UID: "fda43d75-4725-494d-aef1-3dd3b1639559"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.370068 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.370297 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzq8b\" (UniqueName: \"kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.370550 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.370704 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda43d75-4725-494d-aef1-3dd3b1639559-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 
07:41:07.370801 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzkj9\" (UniqueName: \"kubernetes.io/projected/fda43d75-4725-494d-aef1-3dd3b1639559-kube-api-access-zzkj9\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.371252 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.371466 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.403310 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzq8b\" (UniqueName: \"kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b\") pod \"redhat-operators-6g45p\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") " pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.540488 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6g45p" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.577496 4769 generic.go:334] "Generic (PLEG): container finished" podID="fda43d75-4725-494d-aef1-3dd3b1639559" containerID="11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e" exitCode=0 Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.577547 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerDied","Data":"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e"} Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.577577 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zxltv" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.577608 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zxltv" event={"ID":"fda43d75-4725-494d-aef1-3dd3b1639559","Type":"ContainerDied","Data":"ef94cbf29419867a2a0248377bdd08568ca0af71bd85c9c87d0b02558dfc9322"} Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.577628 4769 scope.go:117] "RemoveContainer" containerID="11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.607739 4769 scope.go:117] "RemoveContainer" containerID="24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.623719 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.633159 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zxltv"] Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.661223 4769 scope.go:117] "RemoveContainer" 
containerID="31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.690820 4769 scope.go:117] "RemoveContainer" containerID="11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e" Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.692187 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e\": container with ID starting with 11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e not found: ID does not exist" containerID="11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.692219 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e"} err="failed to get container status \"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e\": rpc error: code = NotFound desc = could not find container \"11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e\": container with ID starting with 11d40780028213c457c4bd6f28fed38f5eaea1a7779de07afdf9b206357f587e not found: ID does not exist" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.692242 4769 scope.go:117] "RemoveContainer" containerID="24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad" Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.692411 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad\": container with ID starting with 24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad not found: ID does not exist" containerID="24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad" Oct 06 07:41:07 crc 
kubenswrapper[4769]: I1006 07:41:07.692447 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad"} err="failed to get container status \"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad\": rpc error: code = NotFound desc = could not find container \"24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad\": container with ID starting with 24d94cad93ba284e6d97831b9b108efc46a67a52cb99b8ad37515fa6d29d91ad not found: ID does not exist" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.692461 4769 scope.go:117] "RemoveContainer" containerID="31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c" Oct 06 07:41:07 crc kubenswrapper[4769]: E1006 07:41:07.692706 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c\": container with ID starting with 31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c not found: ID does not exist" containerID="31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.692728 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c"} err="failed to get container status \"31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c\": rpc error: code = NotFound desc = could not find container \"31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c\": container with ID starting with 31d9637b1342d001af95a5a4e3a83c6eb729416dba0a2f7bb8915a4e1c34c15c not found: ID does not exist" Oct 06 07:41:07 crc kubenswrapper[4769]: I1006 07:41:07.992584 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"] Oct 06 07:41:08 crc 
kubenswrapper[4769]: I1006 07:41:08.178332 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda43d75-4725-494d-aef1-3dd3b1639559" path="/var/lib/kubelet/pods/fda43d75-4725-494d-aef1-3dd3b1639559/volumes"
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.542622 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.542978 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.590351 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.591073 4769 generic.go:334] "Generic (PLEG): container finished" podID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerID="cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef" exitCode=0
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.591165 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerDied","Data":"cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef"}
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.591203 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerStarted","Data":"73aae542cdaf3251d9d2beb9e158fa1958948b23bee3d52facbbfa27f36097e8"}
Oct 06 07:41:08 crc kubenswrapper[4769]: I1006 07:41:08.643923 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:09 crc kubenswrapper[4769]: I1006 07:41:09.602764 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerStarted","Data":"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"}
Oct 06 07:41:10 crc kubenswrapper[4769]: I1006 07:41:10.614086 4769 generic.go:334] "Generic (PLEG): container finished" podID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerID="cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836" exitCode=0
Oct 06 07:41:10 crc kubenswrapper[4769]: I1006 07:41:10.614172 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerDied","Data":"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"}
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.413331 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"]
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.413878 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lzrst" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="registry-server" containerID="cri-o://9c40612a0f89ef8be62cb9c4b42394c04954dcba588f74234236b73d2298147b" gracePeriod=2
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.634704 4769 generic.go:334] "Generic (PLEG): container finished" podID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerID="9c40612a0f89ef8be62cb9c4b42394c04954dcba588f74234236b73d2298147b" exitCode=0
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.634788 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerDied","Data":"9c40612a0f89ef8be62cb9c4b42394c04954dcba588f74234236b73d2298147b"}
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.636898 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerStarted","Data":"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"}
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.654043 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6g45p" podStartSLOduration=2.127710075 podStartE2EDuration="4.654024296s" podCreationTimestamp="2025-10-06 07:41:07 +0000 UTC" firstStartedPulling="2025-10-06 07:41:08.595052097 +0000 UTC m=+1465.119333244" lastFinishedPulling="2025-10-06 07:41:11.121366318 +0000 UTC m=+1467.645647465" observedRunningTime="2025-10-06 07:41:11.650867471 +0000 UTC m=+1468.175148638" watchObservedRunningTime="2025-10-06 07:41:11.654024296 +0000 UTC m=+1468.178305443"
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.870732 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.949261 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities\") pod \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") "
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.949514 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content\") pod \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") "
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.949575 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlnks\" (UniqueName: \"kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks\") pod \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\" (UID: \"6b1e6db4-7579-4d2d-9a48-a9f40ed95650\") "
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.949868 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities" (OuterVolumeSpecName: "utilities") pod "6b1e6db4-7579-4d2d-9a48-a9f40ed95650" (UID: "6b1e6db4-7579-4d2d-9a48-a9f40ed95650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.950183 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.969885 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b1e6db4-7579-4d2d-9a48-a9f40ed95650" (UID: "6b1e6db4-7579-4d2d-9a48-a9f40ed95650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:41:11 crc kubenswrapper[4769]: I1006 07:41:11.972126 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks" (OuterVolumeSpecName: "kube-api-access-vlnks") pod "6b1e6db4-7579-4d2d-9a48-a9f40ed95650" (UID: "6b1e6db4-7579-4d2d-9a48-a9f40ed95650"). InnerVolumeSpecName "kube-api-access-vlnks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.052394 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.052466 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlnks\" (UniqueName: \"kubernetes.io/projected/6b1e6db4-7579-4d2d-9a48-a9f40ed95650-kube-api-access-vlnks\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.649742 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lzrst" event={"ID":"6b1e6db4-7579-4d2d-9a48-a9f40ed95650","Type":"ContainerDied","Data":"82e9d4aaa8667b8ff0ec07d5b91fc88049dee895178c271538a680365131982e"}
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.649773 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lzrst"
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.650229 4769 scope.go:117] "RemoveContainer" containerID="9c40612a0f89ef8be62cb9c4b42394c04954dcba588f74234236b73d2298147b"
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.696099 4769 scope.go:117] "RemoveContainer" containerID="a26423dd49ed3997a75eab50f849fbf325402f54095a9efc488d0f991f2ff94e"
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.701353 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"]
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.714575 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lzrst"]
Oct 06 07:41:12 crc kubenswrapper[4769]: I1006 07:41:12.749244 4769 scope.go:117] "RemoveContainer" containerID="d394cc2d6fab79db4e7dc112865f3efee17c07f68af1d30621407e45ca6add6e"
Oct 06 07:41:14 crc kubenswrapper[4769]: I1006 07:41:14.040905 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nxmsq"]
Oct 06 07:41:14 crc kubenswrapper[4769]: I1006 07:41:14.048177 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nxmsq"]
Oct 06 07:41:14 crc kubenswrapper[4769]: I1006 07:41:14.179582 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3f9e58-995c-4420-94bb-5e9672b469b7" path="/var/lib/kubelet/pods/1e3f9e58-995c-4420-94bb-5e9672b469b7/volumes"
Oct 06 07:41:14 crc kubenswrapper[4769]: I1006 07:41:14.181366 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" path="/var/lib/kubelet/pods/6b1e6db4-7579-4d2d-9a48-a9f40ed95650/volumes"
Oct 06 07:41:15 crc kubenswrapper[4769]: I1006 07:41:15.033860 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-prfnk"]
Oct 06 07:41:15 crc kubenswrapper[4769]: I1006 07:41:15.050375 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dnxtk"]
Oct 06 07:41:15 crc kubenswrapper[4769]: I1006 07:41:15.057815 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dnxtk"]
Oct 06 07:41:15 crc kubenswrapper[4769]: I1006 07:41:15.064807 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-prfnk"]
Oct 06 07:41:16 crc kubenswrapper[4769]: I1006 07:41:16.185507 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f13035c-17a8-4de2-b9c6-31e517b47675" path="/var/lib/kubelet/pods/8f13035c-17a8-4de2-b9c6-31e517b47675/volumes"
Oct 06 07:41:16 crc kubenswrapper[4769]: I1006 07:41:16.187100 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924017e0-5e67-41ee-a60a-48d5c71da1cc" path="/var/lib/kubelet/pods/924017e0-5e67-41ee-a60a-48d5c71da1cc/volumes"
Oct 06 07:41:17 crc kubenswrapper[4769]: I1006 07:41:17.540924 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:17 crc kubenswrapper[4769]: I1006 07:41:17.541230 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:17 crc kubenswrapper[4769]: I1006 07:41:17.597245 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:17 crc kubenswrapper[4769]: I1006 07:41:17.740255 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:17 crc kubenswrapper[4769]: I1006 07:41:17.831246 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"]
Oct 06 07:41:19 crc kubenswrapper[4769]: I1006 07:41:19.709774 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6g45p" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="registry-server" containerID="cri-o://18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7" gracePeriod=2
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.106085 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.204803 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzq8b\" (UniqueName: \"kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b\") pod \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") "
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.204895 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities\") pod \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") "
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.205005 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content\") pod \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\" (UID: \"417f0a4c-56d1-4afe-8ed5-76dac56c748a\") "
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.205547 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities" (OuterVolumeSpecName: "utilities") pod "417f0a4c-56d1-4afe-8ed5-76dac56c748a" (UID: "417f0a4c-56d1-4afe-8ed5-76dac56c748a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.205743 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.212299 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b" (OuterVolumeSpecName: "kube-api-access-fzq8b") pod "417f0a4c-56d1-4afe-8ed5-76dac56c748a" (UID: "417f0a4c-56d1-4afe-8ed5-76dac56c748a"). InnerVolumeSpecName "kube-api-access-fzq8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.292273 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "417f0a4c-56d1-4afe-8ed5-76dac56c748a" (UID: "417f0a4c-56d1-4afe-8ed5-76dac56c748a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.309323 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzq8b\" (UniqueName: \"kubernetes.io/projected/417f0a4c-56d1-4afe-8ed5-76dac56c748a-kube-api-access-fzq8b\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.309352 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417f0a4c-56d1-4afe-8ed5-76dac56c748a-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.722064 4769 generic.go:334] "Generic (PLEG): container finished" podID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerID="18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7" exitCode=0
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.722100 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerDied","Data":"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"}
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.722125 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g45p" event={"ID":"417f0a4c-56d1-4afe-8ed5-76dac56c748a","Type":"ContainerDied","Data":"73aae542cdaf3251d9d2beb9e158fa1958948b23bee3d52facbbfa27f36097e8"}
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.722141 4769 scope.go:117] "RemoveContainer" containerID="18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.722240 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6g45p"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.762674 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"]
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.769218 4769 scope.go:117] "RemoveContainer" containerID="cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.771179 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6g45p"]
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.806414 4769 scope.go:117] "RemoveContainer" containerID="cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.851415 4769 scope.go:117] "RemoveContainer" containerID="18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"
Oct 06 07:41:20 crc kubenswrapper[4769]: E1006 07:41:20.851939 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7\": container with ID starting with 18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7 not found: ID does not exist" containerID="18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.851976 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7"} err="failed to get container status \"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7\": rpc error: code = NotFound desc = could not find container \"18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7\": container with ID starting with 18d4752ff7702e0d697c612f46974f61b7f2154797ac65ef0a874a0c38a519f7 not found: ID does not exist"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.851997 4769 scope.go:117] "RemoveContainer" containerID="cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"
Oct 06 07:41:20 crc kubenswrapper[4769]: E1006 07:41:20.852308 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836\": container with ID starting with cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836 not found: ID does not exist" containerID="cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.852335 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836"} err="failed to get container status \"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836\": rpc error: code = NotFound desc = could not find container \"cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836\": container with ID starting with cdd3cb0b3f546a6729cad882254e5143b7126b3a4b6c2b6c373ce2de04b56836 not found: ID does not exist"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.852352 4769 scope.go:117] "RemoveContainer" containerID="cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef"
Oct 06 07:41:20 crc kubenswrapper[4769]: E1006 07:41:20.852600 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef\": container with ID starting with cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef not found: ID does not exist" containerID="cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef"
Oct 06 07:41:20 crc kubenswrapper[4769]: I1006 07:41:20.852624 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef"} err="failed to get container status \"cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef\": rpc error: code = NotFound desc = could not find container \"cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef\": container with ID starting with cabdf6007ef6968b4eaf7de901795ee1ae6783340ce4e5b86d6c9f05af36f7ef not found: ID does not exist"
Oct 06 07:41:22 crc kubenswrapper[4769]: I1006 07:41:22.184726 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" path="/var/lib/kubelet/pods/417f0a4c-56d1-4afe-8ed5-76dac56c748a/volumes"
Oct 06 07:41:22 crc kubenswrapper[4769]: I1006 07:41:22.245519 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 07:41:22 crc kubenswrapper[4769]: I1006 07:41:22.245589 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 07:41:24 crc kubenswrapper[4769]: I1006 07:41:24.029577 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-284d-account-create-5hzmk"]
Oct 06 07:41:24 crc kubenswrapper[4769]: I1006 07:41:24.039657 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-284d-account-create-5hzmk"]
Oct 06 07:41:24 crc kubenswrapper[4769]: I1006 07:41:24.178165 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c65e686-3f39-4052-97ef-a7bf26d32ffb" path="/var/lib/kubelet/pods/1c65e686-3f39-4052-97ef-a7bf26d32ffb/volumes"
Oct 06 07:41:29 crc kubenswrapper[4769]: I1006 07:41:29.033800 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-93a1-account-create-s99j6"]
Oct 06 07:41:29 crc kubenswrapper[4769]: I1006 07:41:29.043192 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-93a1-account-create-s99j6"]
Oct 06 07:41:30 crc kubenswrapper[4769]: I1006 07:41:30.190703 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104ca122-82e6-441e-ae08-76ab2b828724" path="/var/lib/kubelet/pods/104ca122-82e6-441e-ae08-76ab2b828724/volumes"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476108 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5c7kx"]
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476600 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476616 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476634 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="extract-content"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476643 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="extract-content"
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476661 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="extract-content"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476670 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="extract-content"
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476683 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476692 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476708 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="extract-utilities"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476716 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="extract-utilities"
Oct 06 07:41:31 crc kubenswrapper[4769]: E1006 07:41:31.476733 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="extract-utilities"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.476742 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="extract-utilities"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.477033 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b1e6db4-7579-4d2d-9a48-a9f40ed95650" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.477066 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="417f0a4c-56d1-4afe-8ed5-76dac56c748a" containerName="registry-server"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.478844 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.485970 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c7kx"]
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.503930 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-utilities\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.504001 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xv5\" (UniqueName: \"kubernetes.io/projected/6575a5fc-632c-4f5b-927e-aa949b8fecbc-kube-api-access-v8xv5\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.504157 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-catalog-content\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.606109 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-catalog-content\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.606169 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-utilities\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.606209 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8xv5\" (UniqueName: \"kubernetes.io/projected/6575a5fc-632c-4f5b-927e-aa949b8fecbc-kube-api-access-v8xv5\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.606764 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-catalog-content\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.606902 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6575a5fc-632c-4f5b-927e-aa949b8fecbc-utilities\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.639526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8xv5\" (UniqueName: \"kubernetes.io/projected/6575a5fc-632c-4f5b-927e-aa949b8fecbc-kube-api-access-v8xv5\") pod \"community-operators-5c7kx\" (UID: \"6575a5fc-632c-4f5b-927e-aa949b8fecbc\") " pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:31 crc kubenswrapper[4769]: I1006 07:41:31.801415 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c7kx"
Oct 06 07:41:32 crc kubenswrapper[4769]: I1006 07:41:32.341852 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c7kx"]
Oct 06 07:41:32 crc kubenswrapper[4769]: I1006 07:41:32.867758 4769 generic.go:334] "Generic (PLEG): container finished" podID="6575a5fc-632c-4f5b-927e-aa949b8fecbc" containerID="bfabe89cff10f6631f6b19089fb028857f2c0cedae71319b682e6cfe0a266c56" exitCode=0
Oct 06 07:41:32 crc kubenswrapper[4769]: I1006 07:41:32.867827 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c7kx" event={"ID":"6575a5fc-632c-4f5b-927e-aa949b8fecbc","Type":"ContainerDied","Data":"bfabe89cff10f6631f6b19089fb028857f2c0cedae71319b682e6cfe0a266c56"}
Oct 06 07:41:32 crc kubenswrapper[4769]: I1006 07:41:32.868125 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c7kx" event={"ID":"6575a5fc-632c-4f5b-927e-aa949b8fecbc","Type":"ContainerStarted","Data":"16271c61690c14c38b482ee8379e76e6edc5f8255d50f24457a05d1a5672f9c3"}
Oct 06 07:41:33 crc kubenswrapper[4769]: I1006 07:41:33.061463 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6e6c-account-create-52h7l"]
Oct 06 07:41:33 crc kubenswrapper[4769]: I1006 07:41:33.076719 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6e6c-account-create-52h7l"]
Oct 06 07:41:34 crc kubenswrapper[4769]: I1006 07:41:34.180401 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61dcddc-b711-44c0-8b71-03cbb04c8f69" path="/var/lib/kubelet/pods/e61dcddc-b711-44c0-8b71-03cbb04c8f69/volumes"
Oct 06 07:41:36 crc kubenswrapper[4769]: I1006 07:41:36.931503 4769 generic.go:334] "Generic (PLEG): container finished" podID="6575a5fc-632c-4f5b-927e-aa949b8fecbc" containerID="3b90370745c4881d1e973c7fea3b61cdebb1000bc4e5d14d3fbed3ed0045573a" exitCode=0
Oct 06 07:41:36 crc kubenswrapper[4769]: I1006 07:41:36.931605 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c7kx" event={"ID":"6575a5fc-632c-4f5b-927e-aa949b8fecbc","Type":"ContainerDied","Data":"3b90370745c4881d1e973c7fea3b61cdebb1000bc4e5d14d3fbed3ed0045573a"}
Oct 06 07:41:37 crc kubenswrapper[4769]: I1006 07:41:37.943274 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c7kx" event={"ID":"6575a5fc-632c-4f5b-927e-aa949b8fecbc","Type":"ContainerStarted","Data":"5de70e2c8103bdfdfa29d700b752cfbc6edfcf4eaae65f9fc883512d3ebb88bf"}
Oct 06 07:41:37 crc kubenswrapper[4769]: I1006 07:41:37.966796 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5c7kx" podStartSLOduration=2.410789304 podStartE2EDuration="6.966775069s" podCreationTimestamp="2025-10-06 07:41:31 +0000 UTC" firstStartedPulling="2025-10-06 07:41:32.870060155 +0000 UTC m=+1489.394341302" lastFinishedPulling="2025-10-06 07:41:37.4260459 +0000 UTC m=+1493.950327067" observedRunningTime="2025-10-06 07:41:37.957551938 +0000 UTC m=+1494.481833095" watchObservedRunningTime="2025-10-06 07:41:37.966775069 +0000 UTC m=+1494.491056216"
Oct 06 07:41:38 crc kubenswrapper[4769]: I1006 07:41:38.957308 4769 generic.go:334] "Generic (PLEG): container finished" podID="d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" containerID="e322e86ec263ccc73e5c229c355fc04f30e578d292941aa7a1a409613b7e3067" exitCode=2
Oct 06 07:41:38 crc kubenswrapper[4769]: I1006 07:41:38.957380 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" event={"ID":"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77","Type":"ContainerDied","Data":"e322e86ec263ccc73e5c229c355fc04f30e578d292941aa7a1a409613b7e3067"}
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.371470 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d"
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.501407 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-252v7\" (UniqueName: \"kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7\") pod \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") "
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.501703 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory\") pod \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") "
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.501886 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key\") pod \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") "
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.502060 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle\") pod \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\" (UID: \"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77\") "
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.506554 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" (UID: "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.521174 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7" (OuterVolumeSpecName: "kube-api-access-252v7") pod "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" (UID: "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77"). InnerVolumeSpecName "kube-api-access-252v7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.538563 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory" (OuterVolumeSpecName: "inventory") pod "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" (UID: "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.553636 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" (UID: "d3e88895-8ab2-4b5c-a4ea-60cdeb335a77"). InnerVolumeSpecName "ssh-key".
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.604182 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-252v7\" (UniqueName: \"kubernetes.io/projected/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-kube-api-access-252v7\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.604223 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.604237 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.604250 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e88895-8ab2-4b5c-a4ea-60cdeb335a77-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.980925 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" event={"ID":"d3e88895-8ab2-4b5c-a4ea-60cdeb335a77","Type":"ContainerDied","Data":"16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8"} Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.980988 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d804463e42b22ab8afbca0341e285236c11b14619517bcaad5894c0ff84eb8" Oct 06 07:41:40 crc kubenswrapper[4769]: I1006 07:41:40.980994 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d" Oct 06 07:41:41 crc kubenswrapper[4769]: I1006 07:41:41.049220 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qx4qw"] Oct 06 07:41:41 crc kubenswrapper[4769]: I1006 07:41:41.057401 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qx4qw"] Oct 06 07:41:41 crc kubenswrapper[4769]: I1006 07:41:41.803522 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5c7kx" Oct 06 07:41:41 crc kubenswrapper[4769]: I1006 07:41:41.803626 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5c7kx" Oct 06 07:41:41 crc kubenswrapper[4769]: I1006 07:41:41.873833 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5c7kx" Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.047796 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-w6j2v"] Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.061402 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-xhzkf"] Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.072736 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-w6j2v"] Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.083180 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-xhzkf"] Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.175741 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484e70cd-c319-46ae-8936-54c215567b03" path="/var/lib/kubelet/pods/484e70cd-c319-46ae-8936-54c215567b03/volumes" Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.176504 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7e6a4579-eb83-4d6a-979b-375d1844efd0" path="/var/lib/kubelet/pods/7e6a4579-eb83-4d6a-979b-375d1844efd0/volumes" Oct 06 07:41:42 crc kubenswrapper[4769]: I1006 07:41:42.177074 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8263a2a5-f139-46de-bc0c-3457a8ffee1a" path="/var/lib/kubelet/pods/8263a2a5-f139-46de-bc0c-3457a8ffee1a/volumes" Oct 06 07:41:51 crc kubenswrapper[4769]: I1006 07:41:51.875129 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5c7kx" Oct 06 07:41:51 crc kubenswrapper[4769]: I1006 07:41:51.961811 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c7kx"] Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.021268 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.021531 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xdb5d" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="registry-server" containerID="cri-o://4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2" gracePeriod=2 Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.245653 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.245733 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.245782 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.246670 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.246728 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" gracePeriod=600 Oct 06 07:41:52 crc kubenswrapper[4769]: E1006 07:41:52.382207 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.502782 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.677321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content\") pod \"e2389387-bd73-49fb-b119-41f04a08f615\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.692727 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb494\" (UniqueName: \"kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494\") pod \"e2389387-bd73-49fb-b119-41f04a08f615\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.693908 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities\") pod \"e2389387-bd73-49fb-b119-41f04a08f615\" (UID: \"e2389387-bd73-49fb-b119-41f04a08f615\") " Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.694951 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities" (OuterVolumeSpecName: "utilities") pod "e2389387-bd73-49fb-b119-41f04a08f615" (UID: "e2389387-bd73-49fb-b119-41f04a08f615"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.702609 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494" (OuterVolumeSpecName: "kube-api-access-xb494") pod "e2389387-bd73-49fb-b119-41f04a08f615" (UID: "e2389387-bd73-49fb-b119-41f04a08f615"). InnerVolumeSpecName "kube-api-access-xb494". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.729637 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2389387-bd73-49fb-b119-41f04a08f615" (UID: "e2389387-bd73-49fb-b119-41f04a08f615"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.796620 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb494\" (UniqueName: \"kubernetes.io/projected/e2389387-bd73-49fb-b119-41f04a08f615-kube-api-access-xb494\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.796679 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:52 crc kubenswrapper[4769]: I1006 07:41:52.796689 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2389387-bd73-49fb-b119-41f04a08f615-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.120357 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" exitCode=0 Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.120456 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf"} Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.120521 4769 scope.go:117] 
"RemoveContainer" containerID="a10ffbcd338af3006a90bff9475390df2aa79af988aec76bc98f6ad864fb7a59" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.121229 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:41:53 crc kubenswrapper[4769]: E1006 07:41:53.121512 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.126048 4769 generic.go:334] "Generic (PLEG): container finished" podID="e2389387-bd73-49fb-b119-41f04a08f615" containerID="4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2" exitCode=0 Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.126086 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerDied","Data":"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2"} Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.126108 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xdb5d" event={"ID":"e2389387-bd73-49fb-b119-41f04a08f615","Type":"ContainerDied","Data":"d67090afbecb9cf131214b2bdddd95024dbfe7e63e95336363e51bcea248834e"} Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.126133 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xdb5d" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.180873 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.187990 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xdb5d"] Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.202279 4769 scope.go:117] "RemoveContainer" containerID="4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.223747 4769 scope.go:117] "RemoveContainer" containerID="c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.248697 4769 scope.go:117] "RemoveContainer" containerID="48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.309806 4769 scope.go:117] "RemoveContainer" containerID="4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2" Oct 06 07:41:53 crc kubenswrapper[4769]: E1006 07:41:53.310280 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2\": container with ID starting with 4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2 not found: ID does not exist" containerID="4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.310328 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2"} err="failed to get container status \"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2\": rpc error: code = NotFound desc = could not find 
container \"4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2\": container with ID starting with 4e2180a68916722e273735d5844bfbb192b16fec45c366fb8625502b219ce2d2 not found: ID does not exist" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.310359 4769 scope.go:117] "RemoveContainer" containerID="c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3" Oct 06 07:41:53 crc kubenswrapper[4769]: E1006 07:41:53.311017 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3\": container with ID starting with c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3 not found: ID does not exist" containerID="c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.311059 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3"} err="failed to get container status \"c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3\": rpc error: code = NotFound desc = could not find container \"c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3\": container with ID starting with c1deb579e154ab1ec002bb2a628d038ab9566c458bc443e8f81a0ea9a3ca13b3 not found: ID does not exist" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.311097 4769 scope.go:117] "RemoveContainer" containerID="48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43" Oct 06 07:41:53 crc kubenswrapper[4769]: E1006 07:41:53.312655 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43\": container with ID starting with 48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43 not found: ID does 
not exist" containerID="48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43" Oct 06 07:41:53 crc kubenswrapper[4769]: I1006 07:41:53.312688 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43"} err="failed to get container status \"48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43\": rpc error: code = NotFound desc = could not find container \"48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43\": container with ID starting with 48ee7f10a680bae049d9f60f2ea4667afff06ad7136abd8066e17a058072aa43 not found: ID does not exist" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.196433 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2389387-bd73-49fb-b119-41f04a08f615" path="/var/lib/kubelet/pods/e2389387-bd73-49fb-b119-41f04a08f615/volumes" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.743759 4769 scope.go:117] "RemoveContainer" containerID="046c3794949f7e4ce6872ad9f332a14039594222883b133d36400baacd1cb241" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.769540 4769 scope.go:117] "RemoveContainer" containerID="2664ece92b0ae82a388545fd80520480a9905668f2be84a1c8fa4446e7e887e2" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.825791 4769 scope.go:117] "RemoveContainer" containerID="72dba0e6f3893968b96cd0ad7f16f3f937382ca65e231ece87350ba09bcd4116" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.850267 4769 scope.go:117] "RemoveContainer" containerID="627511aae2d9b24bfafa4b6ba20a25c62edc3928f1ab91e66f6b26628035e0e9" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.884036 4769 scope.go:117] "RemoveContainer" containerID="6d305a070a689b91a0dacb908c774ed796da5a77b9b086ff52dbd65e2d3db2c7" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.930855 4769 scope.go:117] "RemoveContainer" 
containerID="4b285e7f16fb4df0a3cdc261b7146f5f58d0d58624eed958347a3ae2266ecc7e" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.960650 4769 scope.go:117] "RemoveContainer" containerID="aaf545cfe19bb0ecc8004b06dfc4c31ce6f753a526ab5193183bd41b3f22ea30" Oct 06 07:41:54 crc kubenswrapper[4769]: I1006 07:41:54.997866 4769 scope.go:117] "RemoveContainer" containerID="56a3a9c189436c45d04f7514e8736665125953129e88bdb092d761c3c998269b" Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.034414 4769 scope.go:117] "RemoveContainer" containerID="bba27db932740fc3066c4e967ddc4180c87e3307f4f532f7cac6ef6588759196" Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.037044 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3558-account-create-bv2br"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.048331 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-051d-account-create-q6v8v"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.054029 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fffa-account-create-lhx4k"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.058256 4769 scope.go:117] "RemoveContainer" containerID="61f8d9b4c580899462a2cd040dd367cdf2e20f7cffb6d53352c4fe5dbdf17cea" Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.061163 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3558-account-create-bv2br"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.069179 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fffa-account-create-lhx4k"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.076369 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-051d-account-create-q6v8v"] Oct 06 07:41:55 crc kubenswrapper[4769]: I1006 07:41:55.091666 4769 scope.go:117] "RemoveContainer" containerID="cf4dfec66ceae01ca17b72dfac9b58e81c4d2c2eebc1dd95ddfc0a51e621a00b" Oct 06 
07:41:56 crc kubenswrapper[4769]: I1006 07:41:56.181344 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10374278-0491-41ce-8028-9d6a5c8bf677" path="/var/lib/kubelet/pods/10374278-0491-41ce-8028-9d6a5c8bf677/volumes" Oct 06 07:41:56 crc kubenswrapper[4769]: I1006 07:41:56.182558 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3802f25-aa75-4581-afc1-758dea0695d5" path="/var/lib/kubelet/pods/b3802f25-aa75-4581-afc1-758dea0695d5/volumes" Oct 06 07:41:56 crc kubenswrapper[4769]: I1006 07:41:56.183206 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c03827-a483-4240-afd5-34c086e9226a" path="/var/lib/kubelet/pods/e0c03827-a483-4240-afd5-34c086e9226a/volumes" Oct 06 07:41:59 crc kubenswrapper[4769]: I1006 07:41:59.040149 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x9tdl"] Oct 06 07:41:59 crc kubenswrapper[4769]: I1006 07:41:59.051493 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x9tdl"] Oct 06 07:42:00 crc kubenswrapper[4769]: I1006 07:42:00.175725 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b6f9be-40d3-47be-964f-e271e14a0d84" path="/var/lib/kubelet/pods/03b6f9be-40d3-47be-964f-e271e14a0d84/volumes" Oct 06 07:42:02 crc kubenswrapper[4769]: I1006 07:42:02.045712 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mqvfk"] Oct 06 07:42:02 crc kubenswrapper[4769]: I1006 07:42:02.054961 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mqvfk"] Oct 06 07:42:02 crc kubenswrapper[4769]: I1006 07:42:02.185882 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="586f85b0-c0bf-473c-9446-b9263c3ff95b" path="/var/lib/kubelet/pods/586f85b0-c0bf-473c-9446-b9263c3ff95b/volumes" Oct 06 07:42:05 crc kubenswrapper[4769]: I1006 07:42:05.166769 4769 scope.go:117] "RemoveContainer" 
containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:42:05 crc kubenswrapper[4769]: E1006 07:42:05.167477 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:42:18 crc kubenswrapper[4769]: I1006 07:42:18.167505 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:42:18 crc kubenswrapper[4769]: E1006 07:42:18.168891 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:42:25 crc kubenswrapper[4769]: I1006 07:42:25.036207 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nl2f8"] Oct 06 07:42:25 crc kubenswrapper[4769]: I1006 07:42:25.056535 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nl2f8"] Oct 06 07:42:26 crc kubenswrapper[4769]: I1006 07:42:26.181089 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55fda312-faf1-4c7d-9b03-0aca23b7d5cb" path="/var/lib/kubelet/pods/55fda312-faf1-4c7d-9b03-0aca23b7d5cb/volumes" Oct 06 07:42:27 crc kubenswrapper[4769]: I1006 07:42:27.051360 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-ll86w"] Oct 06 07:42:27 crc 
kubenswrapper[4769]: I1006 07:42:27.060257 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2ctln"] Oct 06 07:42:27 crc kubenswrapper[4769]: I1006 07:42:27.068912 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-ll86w"] Oct 06 07:42:27 crc kubenswrapper[4769]: I1006 07:42:27.079302 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2ctln"] Oct 06 07:42:28 crc kubenswrapper[4769]: I1006 07:42:28.187634 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a" path="/var/lib/kubelet/pods/0bc3dde2-e931-40c6-aadf-f7f03ed8ea7a/volumes" Oct 06 07:42:28 crc kubenswrapper[4769]: I1006 07:42:28.191045 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7fe511-77bf-4a50-b42a-3dee332f2a69" path="/var/lib/kubelet/pods/1b7fe511-77bf-4a50-b42a-3dee332f2a69/volumes" Oct 06 07:42:31 crc kubenswrapper[4769]: I1006 07:42:31.166237 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:42:31 crc kubenswrapper[4769]: E1006 07:42:31.166793 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:42:33 crc kubenswrapper[4769]: I1006 07:42:33.039369 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-thnwc"] Oct 06 07:42:33 crc kubenswrapper[4769]: I1006 07:42:33.050694 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-thnwc"] Oct 06 07:42:34 crc kubenswrapper[4769]: I1006 
07:42:34.182335 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d89b00e-b803-4c22-b820-653e98f239b0" path="/var/lib/kubelet/pods/7d89b00e-b803-4c22-b820-653e98f239b0/volumes" Oct 06 07:42:43 crc kubenswrapper[4769]: I1006 07:42:43.168361 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:42:43 crc kubenswrapper[4769]: E1006 07:42:43.170025 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:42:51 crc kubenswrapper[4769]: I1006 07:42:51.037579 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qtgml"] Oct 06 07:42:51 crc kubenswrapper[4769]: I1006 07:42:51.047464 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qtgml"] Oct 06 07:42:52 crc kubenswrapper[4769]: I1006 07:42:52.178115 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dc18379-7117-430b-9d0f-65115eaedf51" path="/var/lib/kubelet/pods/1dc18379-7117-430b-9d0f-65115eaedf51/volumes" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.389986 4769 scope.go:117] "RemoveContainer" containerID="7531f4e7ec0d07dd5d70e987055e7039694b6866c08666adbce1ad5ade79dc7d" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.442854 4769 scope.go:117] "RemoveContainer" containerID="17de8ff848c3abf1fecac3fe5324e71097eb01eec0bc31a688c2da873fc16fb3" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.475107 4769 scope.go:117] "RemoveContainer" containerID="73fbbe7109399078438588082acac8981570a0e79d33bdf28bfce22f9f79aefd" Oct 06 07:42:55 crc 
kubenswrapper[4769]: I1006 07:42:55.530604 4769 scope.go:117] "RemoveContainer" containerID="f9bfa47e16b68e1996cffe7d0c916889cc632779b05f93c09d859f9fae755623" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.571477 4769 scope.go:117] "RemoveContainer" containerID="71eaac04018cfb7703bfb9c2a0ba922686e6516ce46b112a11e28376db44f53f" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.637200 4769 scope.go:117] "RemoveContainer" containerID="7f2ac8383615cce778273ed66df49510021c22f3683da21635506b0067f46184" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.670815 4769 scope.go:117] "RemoveContainer" containerID="544c825d3d584d3a785c9178b8c1f307caa89327e272758eae009f9dab7b1f94" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.709218 4769 scope.go:117] "RemoveContainer" containerID="16af253dc406622988b38eb18b6229779a434a3a3946dd2712943eeb0222f8ad" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.733961 4769 scope.go:117] "RemoveContainer" containerID="351ee1d2b7d8673491c67436b8edd38490d0fd8f258d97a932d5ed02240e277f" Oct 06 07:42:55 crc kubenswrapper[4769]: I1006 07:42:55.761469 4769 scope.go:117] "RemoveContainer" containerID="4fbf4553296c59fe875e546c9ee0c8c3961d8508f77f4fec5e2b65ad16682b4e" Oct 06 07:42:57 crc kubenswrapper[4769]: I1006 07:42:57.166783 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:42:57 crc kubenswrapper[4769]: E1006 07:42:57.167462 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027309 4769 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc"] Oct 06 07:42:58 crc kubenswrapper[4769]: E1006 07:42:58.027708 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="extract-content" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027721 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="extract-content" Oct 06 07:42:58 crc kubenswrapper[4769]: E1006 07:42:58.027737 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="registry-server" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027744 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="registry-server" Oct 06 07:42:58 crc kubenswrapper[4769]: E1006 07:42:58.027779 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027787 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:42:58 crc kubenswrapper[4769]: E1006 07:42:58.027800 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="extract-utilities" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027806 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="extract-utilities" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.027990 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2389387-bd73-49fb-b119-41f04a08f615" containerName="registry-server" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 
07:42:58.028002 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e88895-8ab2-4b5c-a4ea-60cdeb335a77" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.028591 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.031872 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.031915 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.032555 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.032592 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.039939 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc"] Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.133483 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.133587 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr22t\" (UniqueName: 
\"kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.133655 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.133698 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.235933 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.236021 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr22t\" (UniqueName: \"kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.236094 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.236141 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.242004 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.242134 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.247947 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.251386 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr22t\" (UniqueName: \"kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.356881 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.890559 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc"] Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.893510 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 07:42:58 crc kubenswrapper[4769]: I1006 07:42:58.901847 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" event={"ID":"67fef4ae-c185-4c7d-abba-2410461c1078","Type":"ContainerStarted","Data":"8f7c1f0ab28e73eb5e82616bebe3e4adf125002ccc1191a0ef1e400f9a3679cd"} Oct 06 07:43:00 crc kubenswrapper[4769]: I1006 07:43:00.945494 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" event={"ID":"67fef4ae-c185-4c7d-abba-2410461c1078","Type":"ContainerStarted","Data":"ab4a51aa5d3fc39c85d29c70ddb8230a14634830b31ac413f83f21bfc5f4797a"} Oct 06 07:43:00 crc kubenswrapper[4769]: I1006 
07:43:00.963222 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" podStartSLOduration=2.087210189 podStartE2EDuration="2.963193881s" podCreationTimestamp="2025-10-06 07:42:58 +0000 UTC" firstStartedPulling="2025-10-06 07:42:58.893208612 +0000 UTC m=+1575.417489759" lastFinishedPulling="2025-10-06 07:42:59.769192294 +0000 UTC m=+1576.293473451" observedRunningTime="2025-10-06 07:43:00.962754639 +0000 UTC m=+1577.487035796" watchObservedRunningTime="2025-10-06 07:43:00.963193881 +0000 UTC m=+1577.487475068" Oct 06 07:43:11 crc kubenswrapper[4769]: I1006 07:43:11.166264 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:43:11 crc kubenswrapper[4769]: E1006 07:43:11.167358 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:43:24 crc kubenswrapper[4769]: I1006 07:43:24.173452 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:43:24 crc kubenswrapper[4769]: E1006 07:43:24.174286 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 
07:43:31.037389 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-l5m6l"] Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 07:43:31.046277 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8w99c"] Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 07:43:31.053188 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-96clh"] Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 07:43:31.062690 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-l5m6l"] Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 07:43:31.069844 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8w99c"] Oct 06 07:43:31 crc kubenswrapper[4769]: I1006 07:43:31.076729 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-96clh"] Oct 06 07:43:32 crc kubenswrapper[4769]: I1006 07:43:32.178166 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a24429-6273-4103-a2b6-0e570b0bd532" path="/var/lib/kubelet/pods/36a24429-6273-4103-a2b6-0e570b0bd532/volumes" Oct 06 07:43:32 crc kubenswrapper[4769]: I1006 07:43:32.179484 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870fca03-ee61-4dfc-b9d2-53e3f1ff3694" path="/var/lib/kubelet/pods/870fca03-ee61-4dfc-b9d2-53e3f1ff3694/volumes" Oct 06 07:43:32 crc kubenswrapper[4769]: I1006 07:43:32.180957 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a91c7119-38b5-427e-a20a-010e2fc8b788" path="/var/lib/kubelet/pods/a91c7119-38b5-427e-a20a-010e2fc8b788/volumes" Oct 06 07:43:34 crc kubenswrapper[4769]: I1006 07:43:34.263103 4769 generic.go:334] "Generic (PLEG): container finished" podID="67fef4ae-c185-4c7d-abba-2410461c1078" containerID="ab4a51aa5d3fc39c85d29c70ddb8230a14634830b31ac413f83f21bfc5f4797a" exitCode=2 Oct 06 07:43:34 crc kubenswrapper[4769]: I1006 07:43:34.263195 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" event={"ID":"67fef4ae-c185-4c7d-abba-2410461c1078","Type":"ContainerDied","Data":"ab4a51aa5d3fc39c85d29c70ddb8230a14634830b31ac413f83f21bfc5f4797a"} Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.680027 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.864406 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle\") pod \"67fef4ae-c185-4c7d-abba-2410461c1078\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.864548 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr22t\" (UniqueName: \"kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t\") pod \"67fef4ae-c185-4c7d-abba-2410461c1078\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.864757 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory\") pod \"67fef4ae-c185-4c7d-abba-2410461c1078\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.864829 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key\") pod \"67fef4ae-c185-4c7d-abba-2410461c1078\" (UID: \"67fef4ae-c185-4c7d-abba-2410461c1078\") " Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.870623 4769 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t" (OuterVolumeSpecName: "kube-api-access-gr22t") pod "67fef4ae-c185-4c7d-abba-2410461c1078" (UID: "67fef4ae-c185-4c7d-abba-2410461c1078"). InnerVolumeSpecName "kube-api-access-gr22t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.871538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "67fef4ae-c185-4c7d-abba-2410461c1078" (UID: "67fef4ae-c185-4c7d-abba-2410461c1078"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.898540 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67fef4ae-c185-4c7d-abba-2410461c1078" (UID: "67fef4ae-c185-4c7d-abba-2410461c1078"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.902004 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory" (OuterVolumeSpecName: "inventory") pod "67fef4ae-c185-4c7d-abba-2410461c1078" (UID: "67fef4ae-c185-4c7d-abba-2410461c1078"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.967561 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr22t\" (UniqueName: \"kubernetes.io/projected/67fef4ae-c185-4c7d-abba-2410461c1078-kube-api-access-gr22t\") on node \"crc\" DevicePath \"\"" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.967629 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.967644 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:43:35 crc kubenswrapper[4769]: I1006 07:43:35.967657 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fef4ae-c185-4c7d-abba-2410461c1078-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:43:36 crc kubenswrapper[4769]: I1006 07:43:36.282215 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" event={"ID":"67fef4ae-c185-4c7d-abba-2410461c1078","Type":"ContainerDied","Data":"8f7c1f0ab28e73eb5e82616bebe3e4adf125002ccc1191a0ef1e400f9a3679cd"} Oct 06 07:43:36 crc kubenswrapper[4769]: I1006 07:43:36.282273 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f7c1f0ab28e73eb5e82616bebe3e4adf125002ccc1191a0ef1e400f9a3679cd" Oct 06 07:43:36 crc kubenswrapper[4769]: I1006 07:43:36.282614 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc" Oct 06 07:43:37 crc kubenswrapper[4769]: I1006 07:43:37.166249 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:43:37 crc kubenswrapper[4769]: E1006 07:43:37.166796 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.040199 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-51fe-account-create-bxxcd"] Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.050985 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e12c-account-create-skw9b"] Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.059077 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c2d7-account-create-wqxvp"] Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.065685 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e12c-account-create-skw9b"] Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.072604 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-51fe-account-create-bxxcd"] Oct 06 07:43:41 crc kubenswrapper[4769]: I1006 07:43:41.078935 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c2d7-account-create-wqxvp"] Oct 06 07:43:42 crc kubenswrapper[4769]: I1006 07:43:42.176392 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfbf184-0c47-4c5b-aa47-70d7605b437b" 
path="/var/lib/kubelet/pods/1dfbf184-0c47-4c5b-aa47-70d7605b437b/volumes" Oct 06 07:43:42 crc kubenswrapper[4769]: I1006 07:43:42.176915 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797de656-8644-4e69-9bd9-a568002a8413" path="/var/lib/kubelet/pods/797de656-8644-4e69-9bd9-a568002a8413/volumes" Oct 06 07:43:42 crc kubenswrapper[4769]: I1006 07:43:42.177379 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09c3d22-3073-474f-8465-db74ea316281" path="/var/lib/kubelet/pods/d09c3d22-3073-474f-8465-db74ea316281/volumes" Oct 06 07:43:52 crc kubenswrapper[4769]: I1006 07:43:52.167293 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:43:52 crc kubenswrapper[4769]: E1006 07:43:52.168760 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:43:56 crc kubenswrapper[4769]: I1006 07:43:56.035931 4769 scope.go:117] "RemoveContainer" containerID="34c0a60d91a80bffcae5aa0025fcf58100e9696827a3daa874892d102d47fd2c" Oct 06 07:43:56 crc kubenswrapper[4769]: I1006 07:43:56.067221 4769 scope.go:117] "RemoveContainer" containerID="74912788c10dd40d7c54002e535982f59c4d05f2f3d25404572e66686d339ad0" Oct 06 07:43:56 crc kubenswrapper[4769]: I1006 07:43:56.148161 4769 scope.go:117] "RemoveContainer" containerID="cc3df954b8ad3066e81b6b1e4123c8a3ea7fed55652ea0f3c1a149b02c183608" Oct 06 07:43:56 crc kubenswrapper[4769]: I1006 07:43:56.183120 4769 scope.go:117] "RemoveContainer" containerID="aff6c35585f688a58e3558544917702eff0660dbe2ec31f2f2692f03ff86994d" Oct 06 07:43:56 crc 
kubenswrapper[4769]: I1006 07:43:56.225670 4769 scope.go:117] "RemoveContainer" containerID="44d7dcae3bf3d57c11c33ed7807a2e948a246505c1ecceb458b8fd040ea7a960" Oct 06 07:43:56 crc kubenswrapper[4769]: I1006 07:43:56.261822 4769 scope.go:117] "RemoveContainer" containerID="c00d6bac6380024a3b5ffe5ef9657d09f1d19ea67f9bf864c106c4b32b14d3ac" Oct 06 07:44:07 crc kubenswrapper[4769]: I1006 07:44:07.166267 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:44:07 crc kubenswrapper[4769]: E1006 07:44:07.167386 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:44:09 crc kubenswrapper[4769]: I1006 07:44:09.055169 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-njskw"] Oct 06 07:44:09 crc kubenswrapper[4769]: I1006 07:44:09.061874 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-njskw"] Oct 06 07:44:10 crc kubenswrapper[4769]: I1006 07:44:10.202850 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f3ed31e-1322-4476-85f6-398b2366a129" path="/var/lib/kubelet/pods/8f3ed31e-1322-4476-85f6-398b2366a129/volumes" Oct 06 07:44:18 crc kubenswrapper[4769]: I1006 07:44:18.167928 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:44:18 crc kubenswrapper[4769]: E1006 07:44:18.169214 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:44:32 crc kubenswrapper[4769]: I1006 07:44:32.042771 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-p94mc"] Oct 06 07:44:32 crc kubenswrapper[4769]: I1006 07:44:32.052433 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-p94mc"] Oct 06 07:44:32 crc kubenswrapper[4769]: I1006 07:44:32.165647 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:44:32 crc kubenswrapper[4769]: E1006 07:44:32.165901 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:44:32 crc kubenswrapper[4769]: I1006 07:44:32.175622 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529a47ef-9cb3-4d79-9227-66910a7389e9" path="/var/lib/kubelet/pods/529a47ef-9cb3-4d79-9227-66910a7389e9/volumes" Oct 06 07:44:33 crc kubenswrapper[4769]: I1006 07:44:33.030542 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-csxd9"] Oct 06 07:44:33 crc kubenswrapper[4769]: I1006 07:44:33.040157 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-csxd9"] Oct 06 07:44:34 crc kubenswrapper[4769]: I1006 07:44:34.179149 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6c3b8223-fede-4508-a234-c32e9cc406c5" path="/var/lib/kubelet/pods/6c3b8223-fede-4508-a234-c32e9cc406c5/volumes" Oct 06 07:44:47 crc kubenswrapper[4769]: I1006 07:44:47.166886 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:44:47 crc kubenswrapper[4769]: E1006 07:44:47.167640 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:44:56 crc kubenswrapper[4769]: I1006 07:44:56.395790 4769 scope.go:117] "RemoveContainer" containerID="d738b0ef934e51c1f84b7689c836873ad7e231b9a2184ceb0ba02c92efa7418a" Oct 06 07:44:56 crc kubenswrapper[4769]: I1006 07:44:56.449108 4769 scope.go:117] "RemoveContainer" containerID="b92a3e0e92d427e1c3033bf48992612173b7ddbb4ce094506db95c30afc9d34b" Oct 06 07:44:56 crc kubenswrapper[4769]: I1006 07:44:56.502829 4769 scope.go:117] "RemoveContainer" containerID="eda3c60fd092850f9b4f9387e36dfc0bd4ce5830f25ba159ef40f9df955560cb" Oct 06 07:44:59 crc kubenswrapper[4769]: I1006 07:44:59.166889 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:44:59 crc kubenswrapper[4769]: E1006 07:44:59.167753 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.166837 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5"] Oct 06 07:45:00 crc kubenswrapper[4769]: E1006 07:45:00.167525 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fef4ae-c185-4c7d-abba-2410461c1078" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.167539 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fef4ae-c185-4c7d-abba-2410461c1078" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.167737 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fef4ae-c185-4c7d-abba-2410461c1078" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.168649 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.170966 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.174893 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.233063 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5"] Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.271034 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.271089 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct8fm\" (UniqueName: \"kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.271257 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.373166 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.373635 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.373716 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct8fm\" (UniqueName: \"kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.374551 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.384979 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.390053 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct8fm\" (UniqueName: \"kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm\") pod \"collect-profiles-29328945-v5xt5\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.548085 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:00 crc kubenswrapper[4769]: I1006 07:45:00.981313 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5"] Oct 06 07:45:01 crc kubenswrapper[4769]: I1006 07:45:01.089923 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" event={"ID":"3a49fa2d-6794-4f68-b6d2-a426cfc5724c","Type":"ContainerStarted","Data":"629c6fd3a7ff12a7da074ecf6e07c59a67fba4e9650245e65ebe0a22b27927cc"} Oct 06 07:45:02 crc kubenswrapper[4769]: I1006 07:45:02.102731 4769 generic.go:334] "Generic (PLEG): container finished" podID="3a49fa2d-6794-4f68-b6d2-a426cfc5724c" containerID="58ef2d37c3345429e167383f739bd501f6720fcbff31c460a45aacf5ef61f319" exitCode=0 Oct 06 07:45:02 crc kubenswrapper[4769]: I1006 07:45:02.102811 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" 
event={"ID":"3a49fa2d-6794-4f68-b6d2-a426cfc5724c","Type":"ContainerDied","Data":"58ef2d37c3345429e167383f739bd501f6720fcbff31c460a45aacf5ef61f319"} Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.491324 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.634064 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume\") pod \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.634124 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct8fm\" (UniqueName: \"kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm\") pod \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.634164 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume\") pod \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\" (UID: \"3a49fa2d-6794-4f68-b6d2-a426cfc5724c\") " Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.635084 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a49fa2d-6794-4f68-b6d2-a426cfc5724c" (UID: "3a49fa2d-6794-4f68-b6d2-a426cfc5724c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.641629 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm" (OuterVolumeSpecName: "kube-api-access-ct8fm") pod "3a49fa2d-6794-4f68-b6d2-a426cfc5724c" (UID: "3a49fa2d-6794-4f68-b6d2-a426cfc5724c"). InnerVolumeSpecName "kube-api-access-ct8fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.641638 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a49fa2d-6794-4f68-b6d2-a426cfc5724c" (UID: "3a49fa2d-6794-4f68-b6d2-a426cfc5724c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.736952 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.736996 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct8fm\" (UniqueName: \"kubernetes.io/projected/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-kube-api-access-ct8fm\") on node \"crc\" DevicePath \"\"" Oct 06 07:45:03 crc kubenswrapper[4769]: I1006 07:45:03.737014 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a49fa2d-6794-4f68-b6d2-a426cfc5724c-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 07:45:04 crc kubenswrapper[4769]: I1006 07:45:04.124960 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" 
event={"ID":"3a49fa2d-6794-4f68-b6d2-a426cfc5724c","Type":"ContainerDied","Data":"629c6fd3a7ff12a7da074ecf6e07c59a67fba4e9650245e65ebe0a22b27927cc"} Oct 06 07:45:04 crc kubenswrapper[4769]: I1006 07:45:04.125000 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5" Oct 06 07:45:04 crc kubenswrapper[4769]: I1006 07:45:04.125018 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629c6fd3a7ff12a7da074ecf6e07c59a67fba4e9650245e65ebe0a22b27927cc" Oct 06 07:45:14 crc kubenswrapper[4769]: I1006 07:45:14.182959 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:45:14 crc kubenswrapper[4769]: E1006 07:45:14.184399 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:45:16 crc kubenswrapper[4769]: I1006 07:45:16.050675 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-6d8pt"] Oct 06 07:45:16 crc kubenswrapper[4769]: I1006 07:45:16.057686 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-6d8pt"] Oct 06 07:45:16 crc kubenswrapper[4769]: I1006 07:45:16.182202 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0" path="/var/lib/kubelet/pods/4de9aadc-6e57-40a8-8ecb-c2256ca9f3e0/volumes" Oct 06 07:45:25 crc kubenswrapper[4769]: I1006 07:45:25.166737 4769 scope.go:117] "RemoveContainer" 
containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:45:25 crc kubenswrapper[4769]: E1006 07:45:25.167740 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:45:38 crc kubenswrapper[4769]: I1006 07:45:38.166349 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:45:38 crc kubenswrapper[4769]: E1006 07:45:38.167189 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:45:51 crc kubenswrapper[4769]: I1006 07:45:51.165962 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:45:51 crc kubenswrapper[4769]: E1006 07:45:51.167886 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:45:56 crc kubenswrapper[4769]: I1006 07:45:56.659225 4769 scope.go:117] 
"RemoveContainer" containerID="ee217f44e19c9940ca1324e099b524225b6ba3ec4d9f80ed74cac3d9e25d8c02" Oct 06 07:46:06 crc kubenswrapper[4769]: I1006 07:46:06.165861 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:46:06 crc kubenswrapper[4769]: E1006 07:46:06.166692 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.035457 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr"] Oct 06 07:46:14 crc kubenswrapper[4769]: E1006 07:46:14.036564 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a49fa2d-6794-4f68-b6d2-a426cfc5724c" containerName="collect-profiles" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.036582 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a49fa2d-6794-4f68-b6d2-a426cfc5724c" containerName="collect-profiles" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.036807 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a49fa2d-6794-4f68-b6d2-a426cfc5724c" containerName="collect-profiles" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.037664 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.039962 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.040219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.040247 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.040219 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.051595 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr"] Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.170661 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.170732 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.170787 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.170830 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdm5t\" (UniqueName: \"kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.272936 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdm5t\" (UniqueName: \"kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.273093 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.273159 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.273250 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.280132 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.280874 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.281034 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.289505 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-tdm5t\" (UniqueName: \"kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.363798 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:14 crc kubenswrapper[4769]: I1006 07:46:14.944466 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr"] Oct 06 07:46:15 crc kubenswrapper[4769]: I1006 07:46:15.896533 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" event={"ID":"a13d7d14-1273-4616-9260-fb702b0948f2","Type":"ContainerStarted","Data":"c7211ccee64ba7a23ea4fae7d04742b845d7ea7994d105da3d20bfae04ea4ad3"} Oct 06 07:46:16 crc kubenswrapper[4769]: I1006 07:46:16.906715 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" event={"ID":"a13d7d14-1273-4616-9260-fb702b0948f2","Type":"ContainerStarted","Data":"9eb5c3c775124c0a0650ff390bed94b9799cb3a534117f39cc9b31e4003e63c4"} Oct 06 07:46:21 crc kubenswrapper[4769]: I1006 07:46:21.166116 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:46:21 crc kubenswrapper[4769]: E1006 07:46:21.166812 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:46:33 crc kubenswrapper[4769]: I1006 07:46:33.165825 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:46:33 crc kubenswrapper[4769]: E1006 07:46:33.166724 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:46:47 crc kubenswrapper[4769]: I1006 07:46:47.166374 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:46:47 crc kubenswrapper[4769]: E1006 07:46:47.167210 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:46:52 crc kubenswrapper[4769]: I1006 07:46:52.194905 4769 generic.go:334] "Generic (PLEG): container finished" podID="a13d7d14-1273-4616-9260-fb702b0948f2" containerID="9eb5c3c775124c0a0650ff390bed94b9799cb3a534117f39cc9b31e4003e63c4" exitCode=2 Oct 06 07:46:52 crc kubenswrapper[4769]: I1006 07:46:52.194996 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" 
event={"ID":"a13d7d14-1273-4616-9260-fb702b0948f2","Type":"ContainerDied","Data":"9eb5c3c775124c0a0650ff390bed94b9799cb3a534117f39cc9b31e4003e63c4"} Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.626151 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.742621 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key\") pod \"a13d7d14-1273-4616-9260-fb702b0948f2\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.743035 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory\") pod \"a13d7d14-1273-4616-9260-fb702b0948f2\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.743338 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdm5t\" (UniqueName: \"kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t\") pod \"a13d7d14-1273-4616-9260-fb702b0948f2\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.743445 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle\") pod \"a13d7d14-1273-4616-9260-fb702b0948f2\" (UID: \"a13d7d14-1273-4616-9260-fb702b0948f2\") " Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.748483 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a13d7d14-1273-4616-9260-fb702b0948f2" (UID: "a13d7d14-1273-4616-9260-fb702b0948f2"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.756338 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t" (OuterVolumeSpecName: "kube-api-access-tdm5t") pod "a13d7d14-1273-4616-9260-fb702b0948f2" (UID: "a13d7d14-1273-4616-9260-fb702b0948f2"). InnerVolumeSpecName "kube-api-access-tdm5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.773774 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory" (OuterVolumeSpecName: "inventory") pod "a13d7d14-1273-4616-9260-fb702b0948f2" (UID: "a13d7d14-1273-4616-9260-fb702b0948f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.774920 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a13d7d14-1273-4616-9260-fb702b0948f2" (UID: "a13d7d14-1273-4616-9260-fb702b0948f2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.847964 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.848006 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.848021 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a13d7d14-1273-4616-9260-fb702b0948f2-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:46:53 crc kubenswrapper[4769]: I1006 07:46:53.848031 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdm5t\" (UniqueName: \"kubernetes.io/projected/a13d7d14-1273-4616-9260-fb702b0948f2-kube-api-access-tdm5t\") on node \"crc\" DevicePath \"\"" Oct 06 07:46:54 crc kubenswrapper[4769]: I1006 07:46:54.216632 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" event={"ID":"a13d7d14-1273-4616-9260-fb702b0948f2","Type":"ContainerDied","Data":"c7211ccee64ba7a23ea4fae7d04742b845d7ea7994d105da3d20bfae04ea4ad3"} Oct 06 07:46:54 crc kubenswrapper[4769]: I1006 07:46:54.216672 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7211ccee64ba7a23ea4fae7d04742b845d7ea7994d105da3d20bfae04ea4ad3" Oct 06 07:46:54 crc kubenswrapper[4769]: I1006 07:46:54.216672 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr" Oct 06 07:47:01 crc kubenswrapper[4769]: I1006 07:47:01.166715 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:47:02 crc kubenswrapper[4769]: I1006 07:47:02.305727 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2"} Oct 06 07:49:22 crc kubenswrapper[4769]: I1006 07:49:22.246063 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:49:22 crc kubenswrapper[4769]: I1006 07:49:22.247819 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:49:52 crc kubenswrapper[4769]: I1006 07:49:52.245049 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:49:52 crc kubenswrapper[4769]: I1006 07:49:52.245672 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:50:22 crc kubenswrapper[4769]: I1006 07:50:22.245887 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:50:22 crc kubenswrapper[4769]: I1006 07:50:22.246556 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:50:22 crc kubenswrapper[4769]: I1006 07:50:22.246609 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:50:22 crc kubenswrapper[4769]: I1006 07:50:22.247454 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:50:22 crc kubenswrapper[4769]: I1006 07:50:22.247551 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2" gracePeriod=600 Oct 06 07:50:23 crc kubenswrapper[4769]: I1006 07:50:23.275752 4769 generic.go:334] "Generic 
(PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2" exitCode=0 Oct 06 07:50:23 crc kubenswrapper[4769]: I1006 07:50:23.276339 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2"} Oct 06 07:50:23 crc kubenswrapper[4769]: I1006 07:50:23.276382 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871"} Oct 06 07:50:23 crc kubenswrapper[4769]: I1006 07:50:23.276414 4769 scope.go:117] "RemoveContainer" containerID="682a6a988a2a3da6d6efbba13bb10feff769054386ca709784f85ccb8b3f1adf" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.302800 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:07 crc kubenswrapper[4769]: E1006 07:51:07.303863 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a13d7d14-1273-4616-9260-fb702b0948f2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.303878 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a13d7d14-1273-4616-9260-fb702b0948f2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.304084 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a13d7d14-1273-4616-9260-fb702b0948f2" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.305409 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.326575 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.406051 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.406998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gsd\" (UniqueName: \"kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.407614 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.509463 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.509726 4769 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-m4gsd\" (UniqueName: \"kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.509813 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.510207 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.510222 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.544561 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gsd\" (UniqueName: \"kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd\") pod \"redhat-marketplace-gvckt\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:07 crc kubenswrapper[4769]: I1006 07:51:07.644665 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:08 crc kubenswrapper[4769]: I1006 07:51:08.084044 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:08 crc kubenswrapper[4769]: I1006 07:51:08.718110 4769 generic.go:334] "Generic (PLEG): container finished" podID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerID="9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694" exitCode=0 Oct 06 07:51:08 crc kubenswrapper[4769]: I1006 07:51:08.718202 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerDied","Data":"9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694"} Oct 06 07:51:08 crc kubenswrapper[4769]: I1006 07:51:08.718626 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerStarted","Data":"82142b011a3a5acf590c5c26323e73a6e2830d76656f729a1c3ec28dccfe8fbb"} Oct 06 07:51:08 crc kubenswrapper[4769]: I1006 07:51:08.722250 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 07:51:09 crc kubenswrapper[4769]: I1006 07:51:09.731618 4769 generic.go:334] "Generic (PLEG): container finished" podID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerID="0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a" exitCode=0 Oct 06 07:51:09 crc kubenswrapper[4769]: I1006 07:51:09.732099 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerDied","Data":"0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a"} Oct 06 07:51:10 crc kubenswrapper[4769]: I1006 07:51:10.742742 4769 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerStarted","Data":"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d"} Oct 06 07:51:10 crc kubenswrapper[4769]: I1006 07:51:10.772442 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvckt" podStartSLOduration=2.379309931 podStartE2EDuration="3.772403653s" podCreationTimestamp="2025-10-06 07:51:07 +0000 UTC" firstStartedPulling="2025-10-06 07:51:08.721065143 +0000 UTC m=+2065.245346310" lastFinishedPulling="2025-10-06 07:51:10.114158885 +0000 UTC m=+2066.638440032" observedRunningTime="2025-10-06 07:51:10.764433034 +0000 UTC m=+2067.288714191" watchObservedRunningTime="2025-10-06 07:51:10.772403653 +0000 UTC m=+2067.296684810" Oct 06 07:51:17 crc kubenswrapper[4769]: I1006 07:51:17.645486 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:17 crc kubenswrapper[4769]: I1006 07:51:17.646236 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:17 crc kubenswrapper[4769]: I1006 07:51:17.691344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:17 crc kubenswrapper[4769]: I1006 07:51:17.880970 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:17 crc kubenswrapper[4769]: I1006 07:51:17.929578 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:19 crc kubenswrapper[4769]: I1006 07:51:19.817032 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvckt" 
podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="registry-server" containerID="cri-o://9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d" gracePeriod=2 Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.249614 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.399276 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities\") pod \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.399400 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content\") pod \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.399522 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gsd\" (UniqueName: \"kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd\") pod \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\" (UID: \"d7e63ce0-5652-48f5-a270-f4eab1268bd1\") " Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.400259 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities" (OuterVolumeSpecName: "utilities") pod "d7e63ce0-5652-48f5-a270-f4eab1268bd1" (UID: "d7e63ce0-5652-48f5-a270-f4eab1268bd1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.412738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd" (OuterVolumeSpecName: "kube-api-access-m4gsd") pod "d7e63ce0-5652-48f5-a270-f4eab1268bd1" (UID: "d7e63ce0-5652-48f5-a270-f4eab1268bd1"). InnerVolumeSpecName "kube-api-access-m4gsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.419205 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7e63ce0-5652-48f5-a270-f4eab1268bd1" (UID: "d7e63ce0-5652-48f5-a270-f4eab1268bd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.502252 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.502295 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e63ce0-5652-48f5-a270-f4eab1268bd1-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.502312 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gsd\" (UniqueName: \"kubernetes.io/projected/d7e63ce0-5652-48f5-a270-f4eab1268bd1-kube-api-access-m4gsd\") on node \"crc\" DevicePath \"\"" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.847503 4769 generic.go:334] "Generic (PLEG): container finished" podID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" 
containerID="9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d" exitCode=0 Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.847568 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerDied","Data":"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d"} Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.847602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvckt" event={"ID":"d7e63ce0-5652-48f5-a270-f4eab1268bd1","Type":"ContainerDied","Data":"82142b011a3a5acf590c5c26323e73a6e2830d76656f729a1c3ec28dccfe8fbb"} Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.847629 4769 scope.go:117] "RemoveContainer" containerID="9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.847671 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvckt" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.869662 4769 scope.go:117] "RemoveContainer" containerID="0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.889164 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.893806 4769 scope.go:117] "RemoveContainer" containerID="9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.899605 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvckt"] Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.958919 4769 scope.go:117] "RemoveContainer" containerID="9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d" Oct 06 07:51:20 crc kubenswrapper[4769]: E1006 07:51:20.959667 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d\": container with ID starting with 9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d not found: ID does not exist" containerID="9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.959718 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d"} err="failed to get container status \"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d\": rpc error: code = NotFound desc = could not find container \"9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d\": container with ID starting with 9e3f60215660e65e3e7e9fcdc6d78b1aecf410625c2c7db3c8e42425a067199d not found: 
ID does not exist" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.959749 4769 scope.go:117] "RemoveContainer" containerID="0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a" Oct 06 07:51:20 crc kubenswrapper[4769]: E1006 07:51:20.960048 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a\": container with ID starting with 0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a not found: ID does not exist" containerID="0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.960085 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a"} err="failed to get container status \"0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a\": rpc error: code = NotFound desc = could not find container \"0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a\": container with ID starting with 0d46c7d720e6b0bac9dfe7b5db98ec757bff643b68d05b16061ba75cafb6de1a not found: ID does not exist" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.960116 4769 scope.go:117] "RemoveContainer" containerID="9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694" Oct 06 07:51:20 crc kubenswrapper[4769]: E1006 07:51:20.960374 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694\": container with ID starting with 9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694 not found: ID does not exist" containerID="9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694" Oct 06 07:51:20 crc kubenswrapper[4769]: I1006 07:51:20.960398 4769 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694"} err="failed to get container status \"9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694\": rpc error: code = NotFound desc = could not find container \"9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694\": container with ID starting with 9908793846b2646108882d7907a7d3b36150cdc312ba967b61ecb8e5f877a694 not found: ID does not exist" Oct 06 07:51:22 crc kubenswrapper[4769]: I1006 07:51:22.180216 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" path="/var/lib/kubelet/pods/d7e63ce0-5652-48f5-a270-f4eab1268bd1/volumes" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.038455 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q"] Oct 06 07:52:11 crc kubenswrapper[4769]: E1006 07:52:11.039576 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="extract-utilities" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.039595 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="extract-utilities" Oct 06 07:52:11 crc kubenswrapper[4769]: E1006 07:52:11.039612 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="registry-server" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.039620 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="registry-server" Oct 06 07:52:11 crc kubenswrapper[4769]: E1006 07:52:11.039652 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="extract-content" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.039659 
4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="extract-content" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.039861 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e63ce0-5652-48f5-a270-f4eab1268bd1" containerName="registry-server" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.040688 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.043233 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.043274 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.045580 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.047237 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6hsvg" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.054090 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q"] Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.158076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcb6s\" (UniqueName: \"kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.158194 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.158261 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.158286 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.259714 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.259794 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.259992 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcb6s\" (UniqueName: \"kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.260114 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.280716 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.285471 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcb6s\" (UniqueName: \"kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.287563 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.293174 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.400828 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:11 crc kubenswrapper[4769]: I1006 07:52:11.902771 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q"] Oct 06 07:52:12 crc kubenswrapper[4769]: I1006 07:52:12.356260 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" event={"ID":"a3ccce16-cd72-46c0-ab4f-546d83bf38db","Type":"ContainerStarted","Data":"b34ad960ff12a90571bf3ad232878dcde24319b9f55bb441c0ede69f724b682c"} Oct 06 07:52:13 crc kubenswrapper[4769]: I1006 07:52:13.371636 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" event={"ID":"a3ccce16-cd72-46c0-ab4f-546d83bf38db","Type":"ContainerStarted","Data":"db3381b0bd4a7f6c22b3227e7828efe5d633acfb62447c020a4930ca98dfe759"} Oct 06 07:52:13 crc kubenswrapper[4769]: I1006 07:52:13.398232 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" podStartSLOduration=1.895509948 podStartE2EDuration="2.398208637s" podCreationTimestamp="2025-10-06 07:52:11 +0000 UTC" firstStartedPulling="2025-10-06 07:52:11.911523194 +0000 UTC m=+2128.435804341" lastFinishedPulling="2025-10-06 07:52:12.414221873 +0000 UTC m=+2128.938503030" observedRunningTime="2025-10-06 07:52:13.392178432 +0000 UTC m=+2129.916459669" watchObservedRunningTime="2025-10-06 07:52:13.398208637 +0000 UTC m=+2129.922489784" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.124003 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.127980 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.151535 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.275086 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.275641 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nllf\" (UniqueName: \"kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.275818 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.377713 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.377822 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.377938 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nllf\" (UniqueName: \"kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.378690 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.379342 4769 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.400007 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nllf\" (UniqueName: \"kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf\") pod \"community-operators-55d88\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.449631 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:20 crc kubenswrapper[4769]: I1006 07:52:20.975777 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:21 crc kubenswrapper[4769]: I1006 07:52:21.451218 4769 generic.go:334] "Generic (PLEG): container finished" podID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerID="4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233" exitCode=0 Oct 06 07:52:21 crc kubenswrapper[4769]: I1006 07:52:21.451285 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerDied","Data":"4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233"} Oct 06 07:52:21 crc kubenswrapper[4769]: I1006 07:52:21.451604 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerStarted","Data":"0321a019d900f3e23d16c958cdb9b47f180dddfeba45ca3f67d1028228aecd56"} Oct 06 07:52:22 crc kubenswrapper[4769]: I1006 
07:52:22.245065 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:52:22 crc kubenswrapper[4769]: I1006 07:52:22.245459 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:52:22 crc kubenswrapper[4769]: I1006 07:52:22.460345 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerStarted","Data":"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6"} Oct 06 07:52:23 crc kubenswrapper[4769]: I1006 07:52:23.468202 4769 generic.go:334] "Generic (PLEG): container finished" podID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerID="dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6" exitCode=0 Oct 06 07:52:23 crc kubenswrapper[4769]: I1006 07:52:23.468354 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerDied","Data":"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6"} Oct 06 07:52:24 crc kubenswrapper[4769]: I1006 07:52:24.480119 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerStarted","Data":"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7"} Oct 06 07:52:24 crc kubenswrapper[4769]: I1006 07:52:24.500612 4769 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-55d88" podStartSLOduration=1.8149106320000001 podStartE2EDuration="4.500594217s" podCreationTimestamp="2025-10-06 07:52:20 +0000 UTC" firstStartedPulling="2025-10-06 07:52:21.453652534 +0000 UTC m=+2137.977933691" lastFinishedPulling="2025-10-06 07:52:24.139336129 +0000 UTC m=+2140.663617276" observedRunningTime="2025-10-06 07:52:24.499062955 +0000 UTC m=+2141.023344122" watchObservedRunningTime="2025-10-06 07:52:24.500594217 +0000 UTC m=+2141.024875374" Oct 06 07:52:24 crc kubenswrapper[4769]: I1006 07:52:24.902101 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:24 crc kubenswrapper[4769]: I1006 07:52:24.904153 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:24 crc kubenswrapper[4769]: I1006 07:52:24.914477 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.063291 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.063342 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.063573 4769 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtlrd\" (UniqueName: \"kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.094011 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.095997 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.105385 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.165631 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlrd\" (UniqueName: \"kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.165698 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.165725 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content\") pod \"redhat-operators-9tfxq\" (UID: 
\"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.166264 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.166406 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.197728 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlrd\" (UniqueName: \"kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd\") pod \"redhat-operators-9tfxq\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.234336 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.270908 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.270984 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.271076 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc76x\" (UniqueName: \"kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.380876 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.380932 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities\") pod 
\"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.380970 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc76x\" (UniqueName: \"kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.384528 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.384933 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.411622 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc76x\" (UniqueName: \"kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x\") pod \"certified-operators-xdx69\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.424935 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:25 crc kubenswrapper[4769]: W1006 07:52:25.745947 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d0ed12d_d356_415f_82d9_b960bfddc92f.slice/crio-5a2722e45d1f4a709fbc17b7852d11f8e4db45c99e7a87ad1ba3d6e0f9c23fde WatchSource:0}: Error finding container 5a2722e45d1f4a709fbc17b7852d11f8e4db45c99e7a87ad1ba3d6e0f9c23fde: Status 404 returned error can't find the container with id 5a2722e45d1f4a709fbc17b7852d11f8e4db45c99e7a87ad1ba3d6e0f9c23fde Oct 06 07:52:25 crc kubenswrapper[4769]: I1006 07:52:25.749595 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.052978 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:26 crc kubenswrapper[4769]: W1006 07:52:26.073246 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4f82aa2_1f96_4aa9_aa86_69ef448fe002.slice/crio-51889e374a51094883bedd38535dd3e1a39e59d512f85436e88b6859cdd38464 WatchSource:0}: Error finding container 51889e374a51094883bedd38535dd3e1a39e59d512f85436e88b6859cdd38464: Status 404 returned error can't find the container with id 51889e374a51094883bedd38535dd3e1a39e59d512f85436e88b6859cdd38464 Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.554444 4769 generic.go:334] "Generic (PLEG): container finished" podID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerID="aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29" exitCode=0 Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.554522 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" 
event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerDied","Data":"aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29"} Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.554554 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerStarted","Data":"51889e374a51094883bedd38535dd3e1a39e59d512f85436e88b6859cdd38464"} Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.559724 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerID="9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6" exitCode=0 Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.559768 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerDied","Data":"9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6"} Oct 06 07:52:26 crc kubenswrapper[4769]: I1006 07:52:26.559793 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerStarted","Data":"5a2722e45d1f4a709fbc17b7852d11f8e4db45c99e7a87ad1ba3d6e0f9c23fde"} Oct 06 07:52:27 crc kubenswrapper[4769]: I1006 07:52:27.573153 4769 generic.go:334] "Generic (PLEG): container finished" podID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerID="a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be" exitCode=0 Oct 06 07:52:27 crc kubenswrapper[4769]: I1006 07:52:27.573394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerDied","Data":"a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be"} Oct 06 07:52:27 crc kubenswrapper[4769]: I1006 
07:52:27.582013 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerStarted","Data":"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564"} Oct 06 07:52:28 crc kubenswrapper[4769]: I1006 07:52:28.592064 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerID="8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564" exitCode=0 Oct 06 07:52:28 crc kubenswrapper[4769]: I1006 07:52:28.592158 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerDied","Data":"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564"} Oct 06 07:52:29 crc kubenswrapper[4769]: I1006 07:52:29.602197 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerStarted","Data":"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb"} Oct 06 07:52:29 crc kubenswrapper[4769]: I1006 07:52:29.604834 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerStarted","Data":"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53"} Oct 06 07:52:29 crc kubenswrapper[4769]: I1006 07:52:29.625681 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xdx69" podStartSLOduration=2.289918047 podStartE2EDuration="4.625664751s" podCreationTimestamp="2025-10-06 07:52:25 +0000 UTC" firstStartedPulling="2025-10-06 07:52:26.556484891 +0000 UTC m=+2143.080766038" lastFinishedPulling="2025-10-06 07:52:28.892231565 +0000 UTC m=+2145.416512742" observedRunningTime="2025-10-06 
07:52:29.622046222 +0000 UTC m=+2146.146327369" watchObservedRunningTime="2025-10-06 07:52:29.625664751 +0000 UTC m=+2146.149945888" Oct 06 07:52:29 crc kubenswrapper[4769]: I1006 07:52:29.647226 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9tfxq" podStartSLOduration=3.207355866 podStartE2EDuration="5.64720473s" podCreationTimestamp="2025-10-06 07:52:24 +0000 UTC" firstStartedPulling="2025-10-06 07:52:26.561283792 +0000 UTC m=+2143.085564939" lastFinishedPulling="2025-10-06 07:52:29.001132646 +0000 UTC m=+2145.525413803" observedRunningTime="2025-10-06 07:52:29.6387759 +0000 UTC m=+2146.163057067" watchObservedRunningTime="2025-10-06 07:52:29.64720473 +0000 UTC m=+2146.171485877" Oct 06 07:52:30 crc kubenswrapper[4769]: I1006 07:52:30.450254 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:30 crc kubenswrapper[4769]: I1006 07:52:30.451780 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:30 crc kubenswrapper[4769]: I1006 07:52:30.545727 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:30 crc kubenswrapper[4769]: I1006 07:52:30.665036 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:33 crc kubenswrapper[4769]: I1006 07:52:33.496673 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:33 crc kubenswrapper[4769]: I1006 07:52:33.657723 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-55d88" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="registry-server" 
containerID="cri-o://e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7" gracePeriod=2 Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.117349 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.189166 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities\") pod \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.189199 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nllf\" (UniqueName: \"kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf\") pod \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.189221 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content\") pod \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\" (UID: \"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2\") " Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.190091 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities" (OuterVolumeSpecName: "utilities") pod "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" (UID: "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.196786 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf" (OuterVolumeSpecName: "kube-api-access-2nllf") pod "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" (UID: "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2"). InnerVolumeSpecName "kube-api-access-2nllf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.243725 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" (UID: "8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.291985 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.292023 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nllf\" (UniqueName: \"kubernetes.io/projected/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-kube-api-access-2nllf\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.292034 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.669180 4769 generic.go:334] "Generic (PLEG): container finished" podID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" 
containerID="e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7" exitCode=0 Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.669231 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerDied","Data":"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7"} Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.669244 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-55d88" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.669261 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-55d88" event={"ID":"8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2","Type":"ContainerDied","Data":"0321a019d900f3e23d16c958cdb9b47f180dddfeba45ca3f67d1028228aecd56"} Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.669281 4769 scope.go:117] "RemoveContainer" containerID="e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.703510 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.709086 4769 scope.go:117] "RemoveContainer" containerID="dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.710297 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-55d88"] Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.729222 4769 scope.go:117] "RemoveContainer" containerID="4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.768189 4769 scope.go:117] "RemoveContainer" containerID="e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7" Oct 06 
07:52:34 crc kubenswrapper[4769]: E1006 07:52:34.768608 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7\": container with ID starting with e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7 not found: ID does not exist" containerID="e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.768643 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7"} err="failed to get container status \"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7\": rpc error: code = NotFound desc = could not find container \"e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7\": container with ID starting with e74eaa22f64b7a305977a0d4851af23ab0e9f6f9b02875955a3136d7f4bfa5a7 not found: ID does not exist" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.768668 4769 scope.go:117] "RemoveContainer" containerID="dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6" Oct 06 07:52:34 crc kubenswrapper[4769]: E1006 07:52:34.768896 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6\": container with ID starting with dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6 not found: ID does not exist" containerID="dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.768933 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6"} err="failed to get container status 
\"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6\": rpc error: code = NotFound desc = could not find container \"dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6\": container with ID starting with dd61c1f4fb3588ea04891ccb992811214d856c860f005d6e6b142f9fd7399fd6 not found: ID does not exist" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.768951 4769 scope.go:117] "RemoveContainer" containerID="4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233" Oct 06 07:52:34 crc kubenswrapper[4769]: E1006 07:52:34.769292 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233\": container with ID starting with 4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233 not found: ID does not exist" containerID="4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233" Oct 06 07:52:34 crc kubenswrapper[4769]: I1006 07:52:34.769312 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233"} err="failed to get container status \"4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233\": rpc error: code = NotFound desc = could not find container \"4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233\": container with ID starting with 4319b29a451ad7306ea4bb8af5d38e1280fb7dd1c398300358cf5de196165233 not found: ID does not exist" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.234521 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.235368 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.284171 
4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.426507 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.426825 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.489064 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.736914 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:35 crc kubenswrapper[4769]: I1006 07:52:35.747791 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:36 crc kubenswrapper[4769]: I1006 07:52:36.178920 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" path="/var/lib/kubelet/pods/8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2/volumes" Oct 06 07:52:37 crc kubenswrapper[4769]: I1006 07:52:37.694231 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:38 crc kubenswrapper[4769]: I1006 07:52:38.285671 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:38 crc kubenswrapper[4769]: I1006 07:52:38.714077 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xdx69" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="registry-server" 
containerID="cri-o://7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb" gracePeriod=2 Oct 06 07:52:38 crc kubenswrapper[4769]: I1006 07:52:38.714212 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9tfxq" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="registry-server" containerID="cri-o://2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53" gracePeriod=2 Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.253736 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.281866 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content\") pod \"8d0ed12d-d356-415f-82d9-b960bfddc92f\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.281950 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtlrd\" (UniqueName: \"kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd\") pod \"8d0ed12d-d356-415f-82d9-b960bfddc92f\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.282031 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities\") pod \"8d0ed12d-d356-415f-82d9-b960bfddc92f\" (UID: \"8d0ed12d-d356-415f-82d9-b960bfddc92f\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.282849 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities" (OuterVolumeSpecName: "utilities") pod 
"8d0ed12d-d356-415f-82d9-b960bfddc92f" (UID: "8d0ed12d-d356-415f-82d9-b960bfddc92f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.287718 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd" (OuterVolumeSpecName: "kube-api-access-dtlrd") pod "8d0ed12d-d356-415f-82d9-b960bfddc92f" (UID: "8d0ed12d-d356-415f-82d9-b960bfddc92f"). InnerVolumeSpecName "kube-api-access-dtlrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.361222 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.383528 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc76x\" (UniqueName: \"kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x\") pod \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.383620 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content\") pod \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.383653 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities\") pod \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\" (UID: \"e4f82aa2-1f96-4aa9-aa86-69ef448fe002\") " Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.384035 4769 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-dtlrd\" (UniqueName: \"kubernetes.io/projected/8d0ed12d-d356-415f-82d9-b960bfddc92f-kube-api-access-dtlrd\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.384051 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.384588 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities" (OuterVolumeSpecName: "utilities") pod "e4f82aa2-1f96-4aa9-aa86-69ef448fe002" (UID: "e4f82aa2-1f96-4aa9-aa86-69ef448fe002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.386760 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x" (OuterVolumeSpecName: "kube-api-access-lc76x") pod "e4f82aa2-1f96-4aa9-aa86-69ef448fe002" (UID: "e4f82aa2-1f96-4aa9-aa86-69ef448fe002"). InnerVolumeSpecName "kube-api-access-lc76x". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.396267 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d0ed12d-d356-415f-82d9-b960bfddc92f" (UID: "8d0ed12d-d356-415f-82d9-b960bfddc92f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.429398 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4f82aa2-1f96-4aa9-aa86-69ef448fe002" (UID: "e4f82aa2-1f96-4aa9-aa86-69ef448fe002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.485372 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.485412 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc76x\" (UniqueName: \"kubernetes.io/projected/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-kube-api-access-lc76x\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.485441 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d0ed12d-d356-415f-82d9-b960bfddc92f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.485452 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4f82aa2-1f96-4aa9-aa86-69ef448fe002-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.727203 4769 generic.go:334] "Generic (PLEG): container finished" podID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerID="2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53" exitCode=0 Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.727255 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerDied","Data":"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53"} Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.727284 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfxq" event={"ID":"8d0ed12d-d356-415f-82d9-b960bfddc92f","Type":"ContainerDied","Data":"5a2722e45d1f4a709fbc17b7852d11f8e4db45c99e7a87ad1ba3d6e0f9c23fde"} Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.727304 4769 scope.go:117] "RemoveContainer" containerID="2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.727389 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfxq" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.731100 4769 generic.go:334] "Generic (PLEG): container finished" podID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerID="7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb" exitCode=0 Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.731151 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerDied","Data":"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb"} Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.731185 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xdx69" event={"ID":"e4f82aa2-1f96-4aa9-aa86-69ef448fe002","Type":"ContainerDied","Data":"51889e374a51094883bedd38535dd3e1a39e59d512f85436e88b6859cdd38464"} Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.731282 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xdx69" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.764540 4769 scope.go:117] "RemoveContainer" containerID="8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.769928 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.782031 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9tfxq"] Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.798399 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.818569 4769 scope.go:117] "RemoveContainer" containerID="9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.819027 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xdx69"] Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.835929 4769 scope.go:117] "RemoveContainer" containerID="2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.836545 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53\": container with ID starting with 2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53 not found: ID does not exist" containerID="2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.836615 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53"} 
err="failed to get container status \"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53\": rpc error: code = NotFound desc = could not find container \"2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53\": container with ID starting with 2a93166af382e8fec28779f38b918d579afec962480472577358cfeead749d53 not found: ID does not exist" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.836661 4769 scope.go:117] "RemoveContainer" containerID="8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.837083 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564\": container with ID starting with 8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564 not found: ID does not exist" containerID="8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.837138 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564"} err="failed to get container status \"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564\": rpc error: code = NotFound desc = could not find container \"8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564\": container with ID starting with 8282441ff28a51a8186f967836a4ac872d4641ddba8cf68decb86b38e0c7e564 not found: ID does not exist" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.837158 4769 scope.go:117] "RemoveContainer" containerID="9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.837462 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6\": container with ID starting with 9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6 not found: ID does not exist" containerID="9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.837516 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6"} err="failed to get container status \"9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6\": rpc error: code = NotFound desc = could not find container \"9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6\": container with ID starting with 9d123d81cf9b0b8552df5da8f1d87e45fc27e7d26e61e01e64a45d021dd63cf6 not found: ID does not exist" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.837529 4769 scope.go:117] "RemoveContainer" containerID="7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.853483 4769 scope.go:117] "RemoveContainer" containerID="a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.911238 4769 scope.go:117] "RemoveContainer" containerID="aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.945685 4769 scope.go:117] "RemoveContainer" containerID="7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.946220 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb\": container with ID starting with 7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb not found: ID does not exist" 
containerID="7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.946257 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb"} err="failed to get container status \"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb\": rpc error: code = NotFound desc = could not find container \"7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb\": container with ID starting with 7b18c43b1bc4b66bcc5226d7b055aac27ecba5b548ee6a139505f6a7277e1adb not found: ID does not exist" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.946283 4769 scope.go:117] "RemoveContainer" containerID="a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.946879 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be\": container with ID starting with a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be not found: ID does not exist" containerID="a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.946907 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be"} err="failed to get container status \"a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be\": rpc error: code = NotFound desc = could not find container \"a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be\": container with ID starting with a5ada748403c5df3e6cb974e179f0a4c64756b08d36329a83358ea312a49b6be not found: ID does not exist" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.946924 4769 scope.go:117] 
"RemoveContainer" containerID="aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29" Oct 06 07:52:39 crc kubenswrapper[4769]: E1006 07:52:39.947290 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29\": container with ID starting with aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29 not found: ID does not exist" containerID="aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29" Oct 06 07:52:39 crc kubenswrapper[4769]: I1006 07:52:39.947314 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29"} err="failed to get container status \"aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29\": rpc error: code = NotFound desc = could not find container \"aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29\": container with ID starting with aba43b6570ad4f9f12fb2ab656f68cfbfeb0ffa55e7486f001f60a7a1b27ad29 not found: ID does not exist" Oct 06 07:52:40 crc kubenswrapper[4769]: I1006 07:52:40.182071 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" path="/var/lib/kubelet/pods/8d0ed12d-d356-415f-82d9-b960bfddc92f/volumes" Oct 06 07:52:40 crc kubenswrapper[4769]: I1006 07:52:40.183499 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" path="/var/lib/kubelet/pods/e4f82aa2-1f96-4aa9-aa86-69ef448fe002/volumes" Oct 06 07:52:52 crc kubenswrapper[4769]: I1006 07:52:52.244937 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Oct 06 07:52:52 crc kubenswrapper[4769]: I1006 07:52:52.245385 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:52:52 crc kubenswrapper[4769]: I1006 07:52:52.902896 4769 generic.go:334] "Generic (PLEG): container finished" podID="a3ccce16-cd72-46c0-ab4f-546d83bf38db" containerID="db3381b0bd4a7f6c22b3227e7828efe5d633acfb62447c020a4930ca98dfe759" exitCode=2 Oct 06 07:52:52 crc kubenswrapper[4769]: I1006 07:52:52.902975 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" event={"ID":"a3ccce16-cd72-46c0-ab4f-546d83bf38db","Type":"ContainerDied","Data":"db3381b0bd4a7f6c22b3227e7828efe5d633acfb62447c020a4930ca98dfe759"} Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.358240 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.395763 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") pod \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.397085 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle\") pod \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.397191 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory\") pod \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.397287 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcb6s\" (UniqueName: \"kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s\") pod \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.412803 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a3ccce16-cd72-46c0-ab4f-546d83bf38db" (UID: "a3ccce16-cd72-46c0-ab4f-546d83bf38db"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.416868 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s" (OuterVolumeSpecName: "kube-api-access-vcb6s") pod "a3ccce16-cd72-46c0-ab4f-546d83bf38db" (UID: "a3ccce16-cd72-46c0-ab4f-546d83bf38db"). InnerVolumeSpecName "kube-api-access-vcb6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 07:52:54 crc kubenswrapper[4769]: E1006 07:52:54.423953 4769 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key podName:a3ccce16-cd72-46c0-ab4f-546d83bf38db nodeName:}" failed. No retries permitted until 2025-10-06 07:52:54.923922937 +0000 UTC m=+2171.448204094 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key" (UniqueName: "kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key") pod "a3ccce16-cd72-46c0-ab4f-546d83bf38db" (UID: "a3ccce16-cd72-46c0-ab4f-546d83bf38db") : error deleting /var/lib/kubelet/pods/a3ccce16-cd72-46c0-ab4f-546d83bf38db/volume-subpaths: remove /var/lib/kubelet/pods/a3ccce16-cd72-46c0-ab4f-546d83bf38db/volume-subpaths: no such file or directory Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.428598 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory" (OuterVolumeSpecName: "inventory") pod "a3ccce16-cd72-46c0-ab4f-546d83bf38db" (UID: "a3ccce16-cd72-46c0-ab4f-546d83bf38db"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.499921 4769 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.499973 4769 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-inventory\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.499984 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcb6s\" (UniqueName: \"kubernetes.io/projected/a3ccce16-cd72-46c0-ab4f-546d83bf38db-kube-api-access-vcb6s\") on node \"crc\" DevicePath \"\"" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.924214 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" event={"ID":"a3ccce16-cd72-46c0-ab4f-546d83bf38db","Type":"ContainerDied","Data":"b34ad960ff12a90571bf3ad232878dcde24319b9f55bb441c0ede69f724b682c"} Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.924563 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b34ad960ff12a90571bf3ad232878dcde24319b9f55bb441c0ede69f724b682c" Oct 06 07:52:54 crc kubenswrapper[4769]: I1006 07:52:54.924324 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q" Oct 06 07:52:55 crc kubenswrapper[4769]: I1006 07:52:55.010026 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") pod \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\" (UID: \"a3ccce16-cd72-46c0-ab4f-546d83bf38db\") " Oct 06 07:52:55 crc kubenswrapper[4769]: I1006 07:52:55.017727 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a3ccce16-cd72-46c0-ab4f-546d83bf38db" (UID: "a3ccce16-cd72-46c0-ab4f-546d83bf38db"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 07:52:55 crc kubenswrapper[4769]: I1006 07:52:55.112679 4769 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a3ccce16-cd72-46c0-ab4f-546d83bf38db-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 06 07:53:22 crc kubenswrapper[4769]: I1006 07:53:22.245332 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 07:53:22 crc kubenswrapper[4769]: I1006 07:53:22.245947 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 07:53:22 crc kubenswrapper[4769]: I1006 07:53:22.245999 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 07:53:22 crc kubenswrapper[4769]: I1006 07:53:22.246831 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 07:53:22 crc kubenswrapper[4769]: I1006 07:53:22.246899 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" gracePeriod=600 Oct 06 07:53:22 crc kubenswrapper[4769]: E1006 07:53:22.382296 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:53:23 crc kubenswrapper[4769]: I1006 07:53:23.207792 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" exitCode=0 Oct 06 07:53:23 crc kubenswrapper[4769]: I1006 07:53:23.208006 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871"} Oct 06 07:53:23 crc 
kubenswrapper[4769]: I1006 07:53:23.208198 4769 scope.go:117] "RemoveContainer" containerID="0f1c10d6a1ba3d360ab6b39112fe6532698257e8836633cb0fbe16d4bfa223b2" Oct 06 07:53:23 crc kubenswrapper[4769]: I1006 07:53:23.208870 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:53:23 crc kubenswrapper[4769]: E1006 07:53:23.209166 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:53:35 crc kubenswrapper[4769]: I1006 07:53:35.165982 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:53:35 crc kubenswrapper[4769]: E1006 07:53:35.166917 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:53:50 crc kubenswrapper[4769]: I1006 07:53:50.167045 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:53:50 crc kubenswrapper[4769]: E1006 07:53:50.168173 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:54:02 crc kubenswrapper[4769]: I1006 07:54:02.166362 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:54:02 crc kubenswrapper[4769]: E1006 07:54:02.167036 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:54:17 crc kubenswrapper[4769]: I1006 07:54:17.166615 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:54:17 crc kubenswrapper[4769]: E1006 07:54:17.167398 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:54:31 crc kubenswrapper[4769]: I1006 07:54:31.167004 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:54:31 crc kubenswrapper[4769]: E1006 07:54:31.167886 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:54:43 crc kubenswrapper[4769]: I1006 07:54:43.166852 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:54:43 crc kubenswrapper[4769]: E1006 07:54:43.167626 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:54:56 crc kubenswrapper[4769]: I1006 07:54:56.166558 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:54:56 crc kubenswrapper[4769]: E1006 07:54:56.167656 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:55:08 crc kubenswrapper[4769]: I1006 07:55:08.165726 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:55:08 crc kubenswrapper[4769]: E1006 07:55:08.166591 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:55:20 crc kubenswrapper[4769]: I1006 07:55:20.165606 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:55:20 crc kubenswrapper[4769]: E1006 07:55:20.166288 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:55:32 crc kubenswrapper[4769]: I1006 07:55:32.166462 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:55:32 crc kubenswrapper[4769]: E1006 07:55:32.167365 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:55:43 crc kubenswrapper[4769]: I1006 07:55:43.167032 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:55:43 crc kubenswrapper[4769]: E1006 07:55:43.167782 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:55:54 crc kubenswrapper[4769]: I1006 07:55:54.171256 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:55:54 crc kubenswrapper[4769]: E1006 07:55:54.172366 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:56:09 crc kubenswrapper[4769]: I1006 07:56:09.166283 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:56:09 crc kubenswrapper[4769]: E1006 07:56:09.167051 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:56:21 crc kubenswrapper[4769]: I1006 07:56:21.166559 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:56:21 crc kubenswrapper[4769]: E1006 07:56:21.167460 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:56:35 crc kubenswrapper[4769]: I1006 07:56:35.166477 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:56:35 crc kubenswrapper[4769]: E1006 07:56:35.167165 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:56:46 crc kubenswrapper[4769]: I1006 07:56:46.166139 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:56:46 crc kubenswrapper[4769]: E1006 07:56:46.166961 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:57:01 crc kubenswrapper[4769]: I1006 07:57:01.166055 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:57:01 crc kubenswrapper[4769]: E1006 07:57:01.166979 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:57:13 crc kubenswrapper[4769]: I1006 07:57:13.168779 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:57:13 crc kubenswrapper[4769]: E1006 07:57:13.169703 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:57:28 crc kubenswrapper[4769]: I1006 07:57:28.166104 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:57:28 crc kubenswrapper[4769]: E1006 07:57:28.166919 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:57:40 crc kubenswrapper[4769]: I1006 07:57:40.165985 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:57:40 crc kubenswrapper[4769]: E1006 07:57:40.167017 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:57:53 crc kubenswrapper[4769]: I1006 07:57:53.165786 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:57:53 crc kubenswrapper[4769]: E1006 07:57:53.166681 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:58:06 crc kubenswrapper[4769]: I1006 07:58:06.166183 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:58:06 crc kubenswrapper[4769]: E1006 07:58:06.166947 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:58:19 crc kubenswrapper[4769]: I1006 07:58:19.166135 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:58:19 crc kubenswrapper[4769]: E1006 07:58:19.167532 4769 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 07:58:30 crc kubenswrapper[4769]: I1006 07:58:30.166888 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 07:58:30 crc kubenswrapper[4769]: I1006 07:58:30.982282 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da"} Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.225995 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg"] Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226857 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226870 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226881 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226888 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226899 4769 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226905 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226919 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226924 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226931 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226937 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226954 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226959 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.226975 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.226981 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="extract-content" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.227002 4769 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227008 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="extract-utilities" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.227018 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227023 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: E1006 08:00:00.227038 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3ccce16-cd72-46c0-ab4f-546d83bf38db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227045 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3ccce16-cd72-46c0-ab4f-546d83bf38db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227213 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aab705a-4ac5-4ecb-aef1-9bfbe5b4afd2" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227227 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d0ed12d-d356-415f-82d9-b960bfddc92f" containerName="registry-server" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227236 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3ccce16-cd72-46c0-ab4f-546d83bf38db" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.227251 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4f82aa2-1f96-4aa9-aa86-69ef448fe002" containerName="registry-server" Oct 06 08:00:00 
crc kubenswrapper[4769]: I1006 08:00:00.231189 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.235348 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg"] Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.240331 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.240650 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.366040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ss2z\" (UniqueName: \"kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.366084 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.366132 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume\") pod \"collect-profiles-29328960-gpvzg\" 
(UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.468208 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.468345 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ss2z\" (UniqueName: \"kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.468361 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.469136 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.477271 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.484745 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ss2z\" (UniqueName: \"kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z\") pod \"collect-profiles-29328960-gpvzg\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.561176 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:00 crc kubenswrapper[4769]: I1006 08:00:00.995463 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg"] Oct 06 08:00:01 crc kubenswrapper[4769]: I1006 08:00:01.707370 4769 generic.go:334] "Generic (PLEG): container finished" podID="46926e5d-b1b0-4779-aa80-c66e897833a0" containerID="70b8dd2cce55525cc6af51556ef3e46a3abcca25007e56734c244713ad112075" exitCode=0 Oct 06 08:00:01 crc kubenswrapper[4769]: I1006 08:00:01.707466 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" event={"ID":"46926e5d-b1b0-4779-aa80-c66e897833a0","Type":"ContainerDied","Data":"70b8dd2cce55525cc6af51556ef3e46a3abcca25007e56734c244713ad112075"} Oct 06 08:00:01 crc kubenswrapper[4769]: I1006 08:00:01.707808 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" 
event={"ID":"46926e5d-b1b0-4779-aa80-c66e897833a0","Type":"ContainerStarted","Data":"e6fbd364e65756353e9ddc5ef94d0b5864933dc02e55a673613efc86169fdcc6"} Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.020546 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.117229 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ss2z\" (UniqueName: \"kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z\") pod \"46926e5d-b1b0-4779-aa80-c66e897833a0\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.117321 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume\") pod \"46926e5d-b1b0-4779-aa80-c66e897833a0\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.117495 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume\") pod \"46926e5d-b1b0-4779-aa80-c66e897833a0\" (UID: \"46926e5d-b1b0-4779-aa80-c66e897833a0\") " Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.118459 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "46926e5d-b1b0-4779-aa80-c66e897833a0" (UID: "46926e5d-b1b0-4779-aa80-c66e897833a0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.122578 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z" (OuterVolumeSpecName: "kube-api-access-6ss2z") pod "46926e5d-b1b0-4779-aa80-c66e897833a0" (UID: "46926e5d-b1b0-4779-aa80-c66e897833a0"). InnerVolumeSpecName "kube-api-access-6ss2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.125948 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "46926e5d-b1b0-4779-aa80-c66e897833a0" (UID: "46926e5d-b1b0-4779-aa80-c66e897833a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.220004 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/46926e5d-b1b0-4779-aa80-c66e897833a0-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.220032 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46926e5d-b1b0-4779-aa80-c66e897833a0-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.220041 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ss2z\" (UniqueName: \"kubernetes.io/projected/46926e5d-b1b0-4779-aa80-c66e897833a0-kube-api-access-6ss2z\") on node \"crc\" DevicePath \"\"" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.724791 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" 
event={"ID":"46926e5d-b1b0-4779-aa80-c66e897833a0","Type":"ContainerDied","Data":"e6fbd364e65756353e9ddc5ef94d0b5864933dc02e55a673613efc86169fdcc6"} Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.724833 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg" Oct 06 08:00:03 crc kubenswrapper[4769]: I1006 08:00:03.724866 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6fbd364e65756353e9ddc5ef94d0b5864933dc02e55a673613efc86169fdcc6" Oct 06 08:00:04 crc kubenswrapper[4769]: I1006 08:00:04.100468 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"] Oct 06 08:00:04 crc kubenswrapper[4769]: I1006 08:00:04.117458 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328915-spmcr"] Oct 06 08:00:04 crc kubenswrapper[4769]: I1006 08:00:04.176954 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39a765da-9609-4f66-8444-51c13efe3d3c" path="/var/lib/kubelet/pods/39a765da-9609-4f66-8444-51c13efe3d3c/volumes" Oct 06 08:00:52 crc kubenswrapper[4769]: I1006 08:00:52.245292 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:00:52 crc kubenswrapper[4769]: I1006 08:00:52.245984 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:00:57 crc 
kubenswrapper[4769]: I1006 08:00:57.094906 4769 scope.go:117] "RemoveContainer" containerID="7c3bf6060476df19db1dfd0e9b3c68f0205bc91a69d80093edf7e3d01204e6e1" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.150226 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29328961-lhpgc"] Oct 06 08:01:00 crc kubenswrapper[4769]: E1006 08:01:00.151277 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46926e5d-b1b0-4779-aa80-c66e897833a0" containerName="collect-profiles" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.151295 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="46926e5d-b1b0-4779-aa80-c66e897833a0" containerName="collect-profiles" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.151582 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="46926e5d-b1b0-4779-aa80-c66e897833a0" containerName="collect-profiles" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.152231 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.159565 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29328961-lhpgc"] Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.331045 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.331104 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.331156 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.331179 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2rx\" (UniqueName: \"kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.433115 4769 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.433175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.433229 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.433252 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf2rx\" (UniqueName: \"kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.439459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.440111 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.444616 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.451441 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf2rx\" (UniqueName: \"kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx\") pod \"keystone-cron-29328961-lhpgc\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.513602 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:00 crc kubenswrapper[4769]: I1006 08:01:00.917534 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29328961-lhpgc"] Oct 06 08:01:01 crc kubenswrapper[4769]: I1006 08:01:01.192995 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29328961-lhpgc" event={"ID":"45fb0d84-9e40-482d-82dc-f2040969398d","Type":"ContainerStarted","Data":"fe4c3bc2de5ee3e50d6143a9be9e8c29814ce527c34c38943f49b71541563fbb"} Oct 06 08:01:01 crc kubenswrapper[4769]: I1006 08:01:01.193595 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29328961-lhpgc" event={"ID":"45fb0d84-9e40-482d-82dc-f2040969398d","Type":"ContainerStarted","Data":"3b12a1c554d2e1a09e0d36af77b9c90cdae94b401ca89b581c432cb39b61b295"} Oct 06 08:01:01 crc kubenswrapper[4769]: I1006 08:01:01.217022 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29328961-lhpgc" podStartSLOduration=1.217001347 podStartE2EDuration="1.217001347s" podCreationTimestamp="2025-10-06 08:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 08:01:01.210702275 +0000 UTC m=+2657.734983432" watchObservedRunningTime="2025-10-06 08:01:01.217001347 +0000 UTC m=+2657.741282494" Oct 06 08:01:04 crc kubenswrapper[4769]: I1006 08:01:04.227393 4769 generic.go:334] "Generic (PLEG): container finished" podID="45fb0d84-9e40-482d-82dc-f2040969398d" containerID="fe4c3bc2de5ee3e50d6143a9be9e8c29814ce527c34c38943f49b71541563fbb" exitCode=0 Oct 06 08:01:04 crc kubenswrapper[4769]: I1006 08:01:04.227463 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29328961-lhpgc" 
event={"ID":"45fb0d84-9e40-482d-82dc-f2040969398d","Type":"ContainerDied","Data":"fe4c3bc2de5ee3e50d6143a9be9e8c29814ce527c34c38943f49b71541563fbb"} Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.590205 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.730406 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data\") pod \"45fb0d84-9e40-482d-82dc-f2040969398d\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.730771 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf2rx\" (UniqueName: \"kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx\") pod \"45fb0d84-9e40-482d-82dc-f2040969398d\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.730926 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle\") pod \"45fb0d84-9e40-482d-82dc-f2040969398d\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.731104 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys\") pod \"45fb0d84-9e40-482d-82dc-f2040969398d\" (UID: \"45fb0d84-9e40-482d-82dc-f2040969398d\") " Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.736350 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx" 
(OuterVolumeSpecName: "kube-api-access-mf2rx") pod "45fb0d84-9e40-482d-82dc-f2040969398d" (UID: "45fb0d84-9e40-482d-82dc-f2040969398d"). InnerVolumeSpecName "kube-api-access-mf2rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.736624 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "45fb0d84-9e40-482d-82dc-f2040969398d" (UID: "45fb0d84-9e40-482d-82dc-f2040969398d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.757282 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45fb0d84-9e40-482d-82dc-f2040969398d" (UID: "45fb0d84-9e40-482d-82dc-f2040969398d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.780538 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data" (OuterVolumeSpecName: "config-data") pod "45fb0d84-9e40-482d-82dc-f2040969398d" (UID: "45fb0d84-9e40-482d-82dc-f2040969398d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.833231 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf2rx\" (UniqueName: \"kubernetes.io/projected/45fb0d84-9e40-482d-82dc-f2040969398d-kube-api-access-mf2rx\") on node \"crc\" DevicePath \"\"" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.833265 4769 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.833274 4769 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 06 08:01:05 crc kubenswrapper[4769]: I1006 08:01:05.833283 4769 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45fb0d84-9e40-482d-82dc-f2040969398d-config-data\") on node \"crc\" DevicePath \"\"" Oct 06 08:01:06 crc kubenswrapper[4769]: I1006 08:01:06.244942 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29328961-lhpgc" event={"ID":"45fb0d84-9e40-482d-82dc-f2040969398d","Type":"ContainerDied","Data":"3b12a1c554d2e1a09e0d36af77b9c90cdae94b401ca89b581c432cb39b61b295"} Oct 06 08:01:06 crc kubenswrapper[4769]: I1006 08:01:06.244979 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b12a1c554d2e1a09e0d36af77b9c90cdae94b401ca89b581c432cb39b61b295" Oct 06 08:01:06 crc kubenswrapper[4769]: I1006 08:01:06.245044 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29328961-lhpgc" Oct 06 08:01:22 crc kubenswrapper[4769]: I1006 08:01:22.246057 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:01:22 crc kubenswrapper[4769]: I1006 08:01:22.247247 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.245745 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.246172 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.246213 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.246972 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.247030 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da" gracePeriod=600 Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.641175 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da" exitCode=0 Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.641251 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da"} Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.641583 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053"} Oct 06 08:01:52 crc kubenswrapper[4769]: I1006 08:01:52.641602 4769 scope.go:117] "RemoveContainer" containerID="91e4dc3e1e8ff46dfec6459345a5030af17fbd4e4bb2b3bda849775e1bb2f871" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.183131 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:16 crc kubenswrapper[4769]: E1006 08:02:16.184872 
4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45fb0d84-9e40-482d-82dc-f2040969398d" containerName="keystone-cron" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.184892 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="45fb0d84-9e40-482d-82dc-f2040969398d" containerName="keystone-cron" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.185264 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="45fb0d84-9e40-482d-82dc-f2040969398d" containerName="keystone-cron" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.186812 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.186913 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.328218 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.328566 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtzd\" (UniqueName: \"kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.328724 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.430761 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.430831 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvtzd\" (UniqueName: \"kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.430914 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.431570 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.431670 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.450122 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvtzd\" (UniqueName: \"kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd\") pod \"redhat-marketplace-dxqhx\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.508559 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:16 crc kubenswrapper[4769]: I1006 08:02:16.923155 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:17 crc kubenswrapper[4769]: I1006 08:02:17.875982 4769 generic.go:334] "Generic (PLEG): container finished" podID="941fd080-e290-4b27-bfa7-e614a837e13d" containerID="838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689" exitCode=0 Oct 06 08:02:17 crc kubenswrapper[4769]: I1006 08:02:17.876042 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerDied","Data":"838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689"} Oct 06 08:02:17 crc kubenswrapper[4769]: I1006 08:02:17.876332 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerStarted","Data":"cb5dc18a4c65a79004bc4fcf053edcd6a85efef1ea482cec8fdb187d1b8a89c1"} Oct 06 08:02:17 crc kubenswrapper[4769]: I1006 08:02:17.877797 4769 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Oct 06 08:02:18 crc kubenswrapper[4769]: I1006 08:02:18.889468 4769 generic.go:334] "Generic (PLEG): container finished" podID="941fd080-e290-4b27-bfa7-e614a837e13d" containerID="4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f" exitCode=0 Oct 06 08:02:18 crc kubenswrapper[4769]: I1006 08:02:18.889551 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerDied","Data":"4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f"} Oct 06 08:02:19 crc kubenswrapper[4769]: I1006 08:02:19.900298 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerStarted","Data":"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2"} Oct 06 08:02:19 crc kubenswrapper[4769]: I1006 08:02:19.919672 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dxqhx" podStartSLOduration=2.506636587 podStartE2EDuration="3.919657751s" podCreationTimestamp="2025-10-06 08:02:16 +0000 UTC" firstStartedPulling="2025-10-06 08:02:17.877609756 +0000 UTC m=+2734.401890893" lastFinishedPulling="2025-10-06 08:02:19.29063091 +0000 UTC m=+2735.814912057" observedRunningTime="2025-10-06 08:02:19.915297461 +0000 UTC m=+2736.439578608" watchObservedRunningTime="2025-10-06 08:02:19.919657751 +0000 UTC m=+2736.443938898" Oct 06 08:02:26 crc kubenswrapper[4769]: I1006 08:02:26.509261 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:26 crc kubenswrapper[4769]: I1006 08:02:26.509901 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:26 crc kubenswrapper[4769]: 
I1006 08:02:26.556647 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:27 crc kubenswrapper[4769]: I1006 08:02:27.002020 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:27 crc kubenswrapper[4769]: I1006 08:02:27.050221 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:28 crc kubenswrapper[4769]: I1006 08:02:28.963815 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dxqhx" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="registry-server" containerID="cri-o://cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2" gracePeriod=2 Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.356399 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.464135 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities\") pod \"941fd080-e290-4b27-bfa7-e614a837e13d\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.464186 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content\") pod \"941fd080-e290-4b27-bfa7-e614a837e13d\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.464250 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvtzd\" (UniqueName: 
\"kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd\") pod \"941fd080-e290-4b27-bfa7-e614a837e13d\" (UID: \"941fd080-e290-4b27-bfa7-e614a837e13d\") " Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.465140 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities" (OuterVolumeSpecName: "utilities") pod "941fd080-e290-4b27-bfa7-e614a837e13d" (UID: "941fd080-e290-4b27-bfa7-e614a837e13d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.465672 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.470027 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd" (OuterVolumeSpecName: "kube-api-access-mvtzd") pod "941fd080-e290-4b27-bfa7-e614a837e13d" (UID: "941fd080-e290-4b27-bfa7-e614a837e13d"). InnerVolumeSpecName "kube-api-access-mvtzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.478650 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "941fd080-e290-4b27-bfa7-e614a837e13d" (UID: "941fd080-e290-4b27-bfa7-e614a837e13d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.567461 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/941fd080-e290-4b27-bfa7-e614a837e13d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.567502 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvtzd\" (UniqueName: \"kubernetes.io/projected/941fd080-e290-4b27-bfa7-e614a837e13d-kube-api-access-mvtzd\") on node \"crc\" DevicePath \"\"" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.973834 4769 generic.go:334] "Generic (PLEG): container finished" podID="941fd080-e290-4b27-bfa7-e614a837e13d" containerID="cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2" exitCode=0 Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.973881 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerDied","Data":"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2"} Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.973897 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxqhx" Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.973935 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxqhx" event={"ID":"941fd080-e290-4b27-bfa7-e614a837e13d","Type":"ContainerDied","Data":"cb5dc18a4c65a79004bc4fcf053edcd6a85efef1ea482cec8fdb187d1b8a89c1"} Oct 06 08:02:29 crc kubenswrapper[4769]: I1006 08:02:29.973961 4769 scope.go:117] "RemoveContainer" containerID="cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.006369 4769 scope.go:117] "RemoveContainer" containerID="4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.007696 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.015172 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxqhx"] Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.027089 4769 scope.go:117] "RemoveContainer" containerID="838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.070345 4769 scope.go:117] "RemoveContainer" containerID="cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2" Oct 06 08:02:30 crc kubenswrapper[4769]: E1006 08:02:30.070960 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2\": container with ID starting with cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2 not found: ID does not exist" containerID="cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.071002 4769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2"} err="failed to get container status \"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2\": rpc error: code = NotFound desc = could not find container \"cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2\": container with ID starting with cf49cb423d4d88314ca02d64390e213879204aaaf9df8a11f683d509dc64a6a2 not found: ID does not exist" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.071027 4769 scope.go:117] "RemoveContainer" containerID="4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f" Oct 06 08:02:30 crc kubenswrapper[4769]: E1006 08:02:30.071544 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f\": container with ID starting with 4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f not found: ID does not exist" containerID="4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.071583 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f"} err="failed to get container status \"4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f\": rpc error: code = NotFound desc = could not find container \"4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f\": container with ID starting with 4d5fcc6c077fb1d7cd466f77b045ee35f5f2cbc47b0389aec5196de321353a9f not found: ID does not exist" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.071610 4769 scope.go:117] "RemoveContainer" containerID="838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689" Oct 06 08:02:30 crc kubenswrapper[4769]: E1006 
08:02:30.072009 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689\": container with ID starting with 838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689 not found: ID does not exist" containerID="838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.072047 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689"} err="failed to get container status \"838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689\": rpc error: code = NotFound desc = could not find container \"838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689\": container with ID starting with 838ab83c1ec1ded953dc82925ac9fe1c09100db809968defdf7477dbd707e689 not found: ID does not exist" Oct 06 08:02:30 crc kubenswrapper[4769]: I1006 08:02:30.176336 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" path="/var/lib/kubelet/pods/941fd080-e290-4b27-bfa7-e614a837e13d/volumes" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.432149 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:02:46 crc kubenswrapper[4769]: E1006 08:02:46.432994 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="extract-content" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.433006 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="extract-content" Oct 06 08:02:46 crc kubenswrapper[4769]: E1006 08:02:46.433015 4769 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="registry-server" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.433020 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="registry-server" Oct 06 08:02:46 crc kubenswrapper[4769]: E1006 08:02:46.433050 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="extract-utilities" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.433056 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="extract-utilities" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.433269 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="941fd080-e290-4b27-bfa7-e614a837e13d" containerName="registry-server" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.434716 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.440974 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.469116 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc8f9\" (UniqueName: \"kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.469330 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities\") pod \"certified-operators-6p6vx\" (UID: 
\"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.469431 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.570737 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc8f9\" (UniqueName: \"kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.570830 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.570861 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.571333 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content\") pod \"certified-operators-6p6vx\" (UID: 
\"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.571826 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.613727 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc8f9\" (UniqueName: \"kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9\") pod \"certified-operators-6p6vx\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:46 crc kubenswrapper[4769]: I1006 08:02:46.755383 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:47 crc kubenswrapper[4769]: I1006 08:02:47.236107 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:02:48 crc kubenswrapper[4769]: I1006 08:02:48.137031 4769 generic.go:334] "Generic (PLEG): container finished" podID="ee1eef46-431f-4be0-851e-c865a65291ea" containerID="ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba" exitCode=0 Oct 06 08:02:48 crc kubenswrapper[4769]: I1006 08:02:48.137074 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerDied","Data":"ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba"} Oct 06 08:02:48 crc kubenswrapper[4769]: I1006 08:02:48.137103 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" 
event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerStarted","Data":"e6640c00284e55b002fd105183c2ce644b1a53d69c4911ce04cce48204698485"} Oct 06 08:02:50 crc kubenswrapper[4769]: I1006 08:02:50.154629 4769 generic.go:334] "Generic (PLEG): container finished" podID="ee1eef46-431f-4be0-851e-c865a65291ea" containerID="ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2" exitCode=0 Oct 06 08:02:50 crc kubenswrapper[4769]: I1006 08:02:50.154766 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerDied","Data":"ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2"} Oct 06 08:02:51 crc kubenswrapper[4769]: I1006 08:02:51.165246 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerStarted","Data":"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1"} Oct 06 08:02:51 crc kubenswrapper[4769]: I1006 08:02:51.185758 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6p6vx" podStartSLOduration=2.577366979 podStartE2EDuration="5.185730946s" podCreationTimestamp="2025-10-06 08:02:46 +0000 UTC" firstStartedPulling="2025-10-06 08:02:48.138639971 +0000 UTC m=+2764.662921128" lastFinishedPulling="2025-10-06 08:02:50.747003918 +0000 UTC m=+2767.271285095" observedRunningTime="2025-10-06 08:02:51.181888281 +0000 UTC m=+2767.706169428" watchObservedRunningTime="2025-10-06 08:02:51.185730946 +0000 UTC m=+2767.710012093" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.609799 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.612248 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.634166 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.668040 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fr49\" (UniqueName: \"kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.668390 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.668734 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.756279 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.756344 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.770711 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.770831 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fr49\" (UniqueName: \"kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.770879 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.771326 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.774479 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.792491 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7fr49\" (UniqueName: \"kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49\") pod \"community-operators-pjhm4\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.808192 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:56 crc kubenswrapper[4769]: I1006 08:02:56.933278 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:02:57 crc kubenswrapper[4769]: I1006 08:02:57.266808 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:57 crc kubenswrapper[4769]: I1006 08:02:57.407197 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:02:58 crc kubenswrapper[4769]: I1006 08:02:58.225604 4769 generic.go:334] "Generic (PLEG): container finished" podID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerID="60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770" exitCode=0 Oct 06 08:02:58 crc kubenswrapper[4769]: I1006 08:02:58.225735 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerDied","Data":"60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770"} Oct 06 08:02:58 crc kubenswrapper[4769]: I1006 08:02:58.225908 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerStarted","Data":"c7ce3bacfb15e89018b2138f136025ffe3b8ca03702f42ec38881a3f7009c5ae"} Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.165148 4769 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.232468 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6p6vx" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="registry-server" containerID="cri-o://cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1" gracePeriod=2 Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.675991 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.719535 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities\") pod \"ee1eef46-431f-4be0-851e-c865a65291ea\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.719667 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc8f9\" (UniqueName: \"kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9\") pod \"ee1eef46-431f-4be0-851e-c865a65291ea\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.719693 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content\") pod \"ee1eef46-431f-4be0-851e-c865a65291ea\" (UID: \"ee1eef46-431f-4be0-851e-c865a65291ea\") " Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.720780 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities" (OuterVolumeSpecName: "utilities") pod 
"ee1eef46-431f-4be0-851e-c865a65291ea" (UID: "ee1eef46-431f-4be0-851e-c865a65291ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.727292 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9" (OuterVolumeSpecName: "kube-api-access-lc8f9") pod "ee1eef46-431f-4be0-851e-c865a65291ea" (UID: "ee1eef46-431f-4be0-851e-c865a65291ea"). InnerVolumeSpecName "kube-api-access-lc8f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.780714 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee1eef46-431f-4be0-851e-c865a65291ea" (UID: "ee1eef46-431f-4be0-851e-c865a65291ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.822159 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc8f9\" (UniqueName: \"kubernetes.io/projected/ee1eef46-431f-4be0-851e-c865a65291ea-kube-api-access-lc8f9\") on node \"crc\" DevicePath \"\"" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.822466 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:02:59 crc kubenswrapper[4769]: I1006 08:02:59.822475 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee1eef46-431f-4be0-851e-c865a65291ea-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.243091 4769 generic.go:334] "Generic (PLEG): container finished" podID="ee1eef46-431f-4be0-851e-c865a65291ea" containerID="cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1" exitCode=0 Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.243165 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6p6vx" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.243176 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerDied","Data":"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1"} Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.243259 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6p6vx" event={"ID":"ee1eef46-431f-4be0-851e-c865a65291ea","Type":"ContainerDied","Data":"e6640c00284e55b002fd105183c2ce644b1a53d69c4911ce04cce48204698485"} Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.243715 4769 scope.go:117] "RemoveContainer" containerID="cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.249625 4769 generic.go:334] "Generic (PLEG): container finished" podID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerID="9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910" exitCode=0 Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.249704 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerDied","Data":"9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910"} Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.277775 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.283602 4769 scope.go:117] "RemoveContainer" containerID="ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.284890 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-6p6vx"] Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.311594 4769 scope.go:117] "RemoveContainer" containerID="ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.328967 4769 scope.go:117] "RemoveContainer" containerID="cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1" Oct 06 08:03:00 crc kubenswrapper[4769]: E1006 08:03:00.330113 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1\": container with ID starting with cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1 not found: ID does not exist" containerID="cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.330154 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1"} err="failed to get container status \"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1\": rpc error: code = NotFound desc = could not find container \"cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1\": container with ID starting with cb26ef79b9de36b8b3c12f125fe1645d87478be7e8109a52a06491db903e9af1 not found: ID does not exist" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.330183 4769 scope.go:117] "RemoveContainer" containerID="ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2" Oct 06 08:03:00 crc kubenswrapper[4769]: E1006 08:03:00.330607 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2\": container with ID starting with 
ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2 not found: ID does not exist" containerID="ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.330646 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2"} err="failed to get container status \"ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2\": rpc error: code = NotFound desc = could not find container \"ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2\": container with ID starting with ef5f355438a6b5b07ca870ea11c1f18951d824fdb17d0abfcbb70e4e38d2c2d2 not found: ID does not exist" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.330677 4769 scope.go:117] "RemoveContainer" containerID="ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba" Oct 06 08:03:00 crc kubenswrapper[4769]: E1006 08:03:00.330952 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba\": container with ID starting with ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba not found: ID does not exist" containerID="ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba" Oct 06 08:03:00 crc kubenswrapper[4769]: I1006 08:03:00.330976 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba"} err="failed to get container status \"ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba\": rpc error: code = NotFound desc = could not find container \"ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba\": container with ID starting with ad84b74a9120caebce4e9fbfb21f9345174d8f2f81377855e6b490cb769f3bba not found: ID does not 
exist" Oct 06 08:03:01 crc kubenswrapper[4769]: I1006 08:03:01.277322 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerStarted","Data":"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb"} Oct 06 08:03:01 crc kubenswrapper[4769]: I1006 08:03:01.293154 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pjhm4" podStartSLOduration=2.799402846 podStartE2EDuration="5.293136233s" podCreationTimestamp="2025-10-06 08:02:56 +0000 UTC" firstStartedPulling="2025-10-06 08:02:58.228041456 +0000 UTC m=+2774.752322603" lastFinishedPulling="2025-10-06 08:03:00.721774843 +0000 UTC m=+2777.246055990" observedRunningTime="2025-10-06 08:03:01.292024933 +0000 UTC m=+2777.816306090" watchObservedRunningTime="2025-10-06 08:03:01.293136233 +0000 UTC m=+2777.817417380" Oct 06 08:03:02 crc kubenswrapper[4769]: I1006 08:03:02.179494 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" path="/var/lib/kubelet/pods/ee1eef46-431f-4be0-851e-c865a65291ea/volumes" Oct 06 08:03:06 crc kubenswrapper[4769]: I1006 08:03:06.933808 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:06 crc kubenswrapper[4769]: I1006 08:03:06.934362 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:06 crc kubenswrapper[4769]: I1006 08:03:06.983558 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:07 crc kubenswrapper[4769]: I1006 08:03:07.370550 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:07 crc 
kubenswrapper[4769]: I1006 08:03:07.436546 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.346013 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pjhm4" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="registry-server" containerID="cri-o://f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb" gracePeriod=2 Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.802218 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.900897 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fr49\" (UniqueName: \"kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49\") pod \"0da973a2-d26c-4b26-a296-f550b8f7075c\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.901304 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content\") pod \"0da973a2-d26c-4b26-a296-f550b8f7075c\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.901517 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities\") pod \"0da973a2-d26c-4b26-a296-f550b8f7075c\" (UID: \"0da973a2-d26c-4b26-a296-f550b8f7075c\") " Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.902464 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities" (OuterVolumeSpecName: "utilities") pod "0da973a2-d26c-4b26-a296-f550b8f7075c" (UID: "0da973a2-d26c-4b26-a296-f550b8f7075c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.907979 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49" (OuterVolumeSpecName: "kube-api-access-7fr49") pod "0da973a2-d26c-4b26-a296-f550b8f7075c" (UID: "0da973a2-d26c-4b26-a296-f550b8f7075c"). InnerVolumeSpecName "kube-api-access-7fr49". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:03:09 crc kubenswrapper[4769]: I1006 08:03:09.950074 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0da973a2-d26c-4b26-a296-f550b8f7075c" (UID: "0da973a2-d26c-4b26-a296-f550b8f7075c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.003474 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fr49\" (UniqueName: \"kubernetes.io/projected/0da973a2-d26c-4b26-a296-f550b8f7075c-kube-api-access-7fr49\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.003506 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.003521 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0da973a2-d26c-4b26-a296-f550b8f7075c-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.364445 4769 generic.go:334] "Generic (PLEG): container finished" podID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerID="f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb" exitCode=0 Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.364499 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerDied","Data":"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb"} Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.364529 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjhm4" event={"ID":"0da973a2-d26c-4b26-a296-f550b8f7075c","Type":"ContainerDied","Data":"c7ce3bacfb15e89018b2138f136025ffe3b8ca03702f42ec38881a3f7009c5ae"} Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.364556 4769 scope.go:117] "RemoveContainer" containerID="f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 
08:03:10.364764 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pjhm4" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.391950 4769 scope.go:117] "RemoveContainer" containerID="9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.395201 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.403400 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pjhm4"] Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.419716 4769 scope.go:117] "RemoveContainer" containerID="60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.451850 4769 scope.go:117] "RemoveContainer" containerID="f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb" Oct 06 08:03:10 crc kubenswrapper[4769]: E1006 08:03:10.452304 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb\": container with ID starting with f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb not found: ID does not exist" containerID="f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.452368 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb"} err="failed to get container status \"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb\": rpc error: code = NotFound desc = could not find container \"f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb\": container with ID starting with 
f79bc0e087c47113cbe99dc1f29cb932223917e649be2c2764ab72508b8fb0eb not found: ID does not exist" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.452397 4769 scope.go:117] "RemoveContainer" containerID="9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910" Oct 06 08:03:10 crc kubenswrapper[4769]: E1006 08:03:10.452730 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910\": container with ID starting with 9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910 not found: ID does not exist" containerID="9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.452756 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910"} err="failed to get container status \"9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910\": rpc error: code = NotFound desc = could not find container \"9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910\": container with ID starting with 9208e2c041d3aed72c08572d3ee2d9764271e4d2c0ff693e717867caf153a910 not found: ID does not exist" Oct 06 08:03:10 crc kubenswrapper[4769]: I1006 08:03:10.452769 4769 scope.go:117] "RemoveContainer" containerID="60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770" Oct 06 08:03:10 crc kubenswrapper[4769]: E1006 08:03:10.452975 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770\": container with ID starting with 60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770 not found: ID does not exist" containerID="60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770" Oct 06 08:03:10 crc 
kubenswrapper[4769]: I1006 08:03:10.453000 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770"} err="failed to get container status \"60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770\": rpc error: code = NotFound desc = could not find container \"60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770\": container with ID starting with 60deda32836445f8e211fb70eae7c1a32c397724a577be87ef6056cbb8e6c770 not found: ID does not exist" Oct 06 08:03:12 crc kubenswrapper[4769]: I1006 08:03:12.177245 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" path="/var/lib/kubelet/pods/0da973a2-d26c-4b26-a296-f550b8f7075c/volumes" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.302627 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304676 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="extract-utilities" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304695 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="extract-utilities" Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304720 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304730 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304762 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="extract-utilities" Oct 
06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304772 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="extract-utilities" Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304784 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="extract-content" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304794 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="extract-content" Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304816 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304823 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: E1006 08:03:25.304843 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="extract-content" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.304852 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="extract-content" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.305111 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da973a2-d26c-4b26-a296-f550b8f7075c" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.305128 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee1eef46-431f-4be0-851e-c865a65291ea" containerName="registry-server" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.309571 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.321917 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.395041 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.395111 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpl4\" (UniqueName: \"kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.395138 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.496647 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpl4\" (UniqueName: \"kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.496694 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.496859 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.497424 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.498015 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.526318 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpl4\" (UniqueName: \"kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4\") pod \"redhat-operators-98s7j\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:25 crc kubenswrapper[4769]: I1006 08:03:25.634911 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:26 crc kubenswrapper[4769]: I1006 08:03:26.089504 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:26 crc kubenswrapper[4769]: I1006 08:03:26.501226 4769 generic.go:334] "Generic (PLEG): container finished" podID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerID="91437830c3557450230a8ab0f6be1703323111add15ec8ca2a0a69f719f44f15" exitCode=0 Oct 06 08:03:26 crc kubenswrapper[4769]: I1006 08:03:26.501272 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerDied","Data":"91437830c3557450230a8ab0f6be1703323111add15ec8ca2a0a69f719f44f15"} Oct 06 08:03:26 crc kubenswrapper[4769]: I1006 08:03:26.501610 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerStarted","Data":"9ef94bc454369ad6212c4fd7bb2bfe07bd7b23cee40d9970ba6342397370d395"} Oct 06 08:03:27 crc kubenswrapper[4769]: I1006 08:03:27.516042 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerStarted","Data":"cafdcd17a3356148aa3dd5562ab7d53bdd96a47014af141a165cd24bb36cde14"} Oct 06 08:03:28 crc kubenswrapper[4769]: I1006 08:03:28.525755 4769 generic.go:334] "Generic (PLEG): container finished" podID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerID="cafdcd17a3356148aa3dd5562ab7d53bdd96a47014af141a165cd24bb36cde14" exitCode=0 Oct 06 08:03:28 crc kubenswrapper[4769]: I1006 08:03:28.525810 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" 
event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerDied","Data":"cafdcd17a3356148aa3dd5562ab7d53bdd96a47014af141a165cd24bb36cde14"} Oct 06 08:03:29 crc kubenswrapper[4769]: I1006 08:03:29.538178 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerStarted","Data":"173c89c6169289228d0f5bbaedfc5c231325927bb49997065c014e87d728bcf6"} Oct 06 08:03:29 crc kubenswrapper[4769]: I1006 08:03:29.563913 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98s7j" podStartSLOduration=2.041298575 podStartE2EDuration="4.563894278s" podCreationTimestamp="2025-10-06 08:03:25 +0000 UTC" firstStartedPulling="2025-10-06 08:03:26.502646727 +0000 UTC m=+2803.026927874" lastFinishedPulling="2025-10-06 08:03:29.02524244 +0000 UTC m=+2805.549523577" observedRunningTime="2025-10-06 08:03:29.558173131 +0000 UTC m=+2806.082454288" watchObservedRunningTime="2025-10-06 08:03:29.563894278 +0000 UTC m=+2806.088175415" Oct 06 08:03:35 crc kubenswrapper[4769]: I1006 08:03:35.635947 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:35 crc kubenswrapper[4769]: I1006 08:03:35.636803 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:35 crc kubenswrapper[4769]: I1006 08:03:35.684632 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:36 crc kubenswrapper[4769]: I1006 08:03:36.674026 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:36 crc kubenswrapper[4769]: I1006 08:03:36.722865 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:38 crc kubenswrapper[4769]: I1006 08:03:38.621831 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98s7j" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="registry-server" containerID="cri-o://173c89c6169289228d0f5bbaedfc5c231325927bb49997065c014e87d728bcf6" gracePeriod=2 Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.634237 4769 generic.go:334] "Generic (PLEG): container finished" podID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerID="173c89c6169289228d0f5bbaedfc5c231325927bb49997065c014e87d728bcf6" exitCode=0 Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.634394 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerDied","Data":"173c89c6169289228d0f5bbaedfc5c231325927bb49997065c014e87d728bcf6"} Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.634695 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98s7j" event={"ID":"f9e99509-4559-4842-9916-aa43b5aa0d7a","Type":"ContainerDied","Data":"9ef94bc454369ad6212c4fd7bb2bfe07bd7b23cee40d9970ba6342397370d395"} Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.634734 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef94bc454369ad6212c4fd7bb2bfe07bd7b23cee40d9970ba6342397370d395" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.690657 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.786436 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content\") pod \"f9e99509-4559-4842-9916-aa43b5aa0d7a\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.786560 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities\") pod \"f9e99509-4559-4842-9916-aa43b5aa0d7a\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.787389 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities" (OuterVolumeSpecName: "utilities") pod "f9e99509-4559-4842-9916-aa43b5aa0d7a" (UID: "f9e99509-4559-4842-9916-aa43b5aa0d7a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.787680 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffpl4\" (UniqueName: \"kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4\") pod \"f9e99509-4559-4842-9916-aa43b5aa0d7a\" (UID: \"f9e99509-4559-4842-9916-aa43b5aa0d7a\") " Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.789702 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.792996 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4" (OuterVolumeSpecName: "kube-api-access-ffpl4") pod "f9e99509-4559-4842-9916-aa43b5aa0d7a" (UID: "f9e99509-4559-4842-9916-aa43b5aa0d7a"). InnerVolumeSpecName "kube-api-access-ffpl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.867883 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9e99509-4559-4842-9916-aa43b5aa0d7a" (UID: "f9e99509-4559-4842-9916-aa43b5aa0d7a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.891658 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffpl4\" (UniqueName: \"kubernetes.io/projected/f9e99509-4559-4842-9916-aa43b5aa0d7a-kube-api-access-ffpl4\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:39 crc kubenswrapper[4769]: I1006 08:03:39.891936 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9e99509-4559-4842-9916-aa43b5aa0d7a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:03:40 crc kubenswrapper[4769]: I1006 08:03:40.642380 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98s7j" Oct 06 08:03:40 crc kubenswrapper[4769]: I1006 08:03:40.665400 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:40 crc kubenswrapper[4769]: I1006 08:03:40.673549 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98s7j"] Oct 06 08:03:42 crc kubenswrapper[4769]: I1006 08:03:42.183704 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" path="/var/lib/kubelet/pods/f9e99509-4559-4842-9916-aa43b5aa0d7a/volumes" Oct 06 08:03:52 crc kubenswrapper[4769]: I1006 08:03:52.245449 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:03:52 crc kubenswrapper[4769]: I1006 08:03:52.246082 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:04:22 crc kubenswrapper[4769]: I1006 08:04:22.245537 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:04:22 crc kubenswrapper[4769]: I1006 08:04:22.246067 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:04:52 crc kubenswrapper[4769]: I1006 08:04:52.245665 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:04:52 crc kubenswrapper[4769]: I1006 08:04:52.246167 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:04:52 crc kubenswrapper[4769]: I1006 08:04:52.246214 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:04:52 crc kubenswrapper[4769]: I1006 08:04:52.247056 4769 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:04:52 crc kubenswrapper[4769]: I1006 08:04:52.247106 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" gracePeriod=600 Oct 06 08:04:52 crc kubenswrapper[4769]: E1006 08:04:52.391912 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:04:53 crc kubenswrapper[4769]: I1006 08:04:53.221616 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" exitCode=0 Oct 06 08:04:53 crc kubenswrapper[4769]: I1006 08:04:53.221707 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053"} Oct 06 08:04:53 crc kubenswrapper[4769]: I1006 08:04:53.221972 4769 scope.go:117] "RemoveContainer" containerID="999e13f471b67ab7936a89c575090a45fd008e7550add0684f123bf3635044da" Oct 06 08:04:53 crc 
kubenswrapper[4769]: I1006 08:04:53.222969 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:04:53 crc kubenswrapper[4769]: E1006 08:04:53.223550 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:05:06 crc kubenswrapper[4769]: I1006 08:05:06.165786 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:05:06 crc kubenswrapper[4769]: E1006 08:05:06.166551 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:05:19 crc kubenswrapper[4769]: I1006 08:05:19.166266 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:05:19 crc kubenswrapper[4769]: E1006 08:05:19.166966 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 
06 08:05:31 crc kubenswrapper[4769]: I1006 08:05:31.167356 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:05:31 crc kubenswrapper[4769]: E1006 08:05:31.168468 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:05:44 crc kubenswrapper[4769]: I1006 08:05:44.172099 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:05:44 crc kubenswrapper[4769]: E1006 08:05:44.173013 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:05:57 crc kubenswrapper[4769]: I1006 08:05:57.166538 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:05:57 crc kubenswrapper[4769]: E1006 08:05:57.167402 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:06:12 crc kubenswrapper[4769]: I1006 08:06:12.166605 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:06:12 crc kubenswrapper[4769]: E1006 08:06:12.168338 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:06:25 crc kubenswrapper[4769]: I1006 08:06:25.166114 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:06:25 crc kubenswrapper[4769]: E1006 08:06:25.166887 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:06:38 crc kubenswrapper[4769]: I1006 08:06:38.166387 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:06:38 crc kubenswrapper[4769]: E1006 08:06:38.167454 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:06:49 crc kubenswrapper[4769]: I1006 08:06:49.165750 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:06:49 crc kubenswrapper[4769]: E1006 08:06:49.167639 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:07:02 crc kubenswrapper[4769]: I1006 08:07:02.167837 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:07:02 crc kubenswrapper[4769]: E1006 08:07:02.168698 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:07:16 crc kubenswrapper[4769]: I1006 08:07:16.166228 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:07:16 crc kubenswrapper[4769]: E1006 08:07:16.167181 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:07:29 crc kubenswrapper[4769]: I1006 08:07:29.166142 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:07:29 crc kubenswrapper[4769]: E1006 08:07:29.167507 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:07:40 crc kubenswrapper[4769]: I1006 08:07:40.166710 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:07:40 crc kubenswrapper[4769]: E1006 08:07:40.167644 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:07:51 crc kubenswrapper[4769]: I1006 08:07:51.167474 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:07:51 crc kubenswrapper[4769]: E1006 08:07:51.168890 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:08:02 crc kubenswrapper[4769]: I1006 08:08:02.166001 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:08:02 crc kubenswrapper[4769]: E1006 08:08:02.166995 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:08:17 crc kubenswrapper[4769]: I1006 08:08:17.165834 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:08:17 crc kubenswrapper[4769]: E1006 08:08:17.166745 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:08:32 crc kubenswrapper[4769]: I1006 08:08:32.166303 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:08:32 crc kubenswrapper[4769]: E1006 08:08:32.167172 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:08:47 crc kubenswrapper[4769]: I1006 08:08:47.166689 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:08:47 crc kubenswrapper[4769]: E1006 08:08:47.167641 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:09:00 crc kubenswrapper[4769]: I1006 08:09:00.166554 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:09:00 crc kubenswrapper[4769]: E1006 08:09:00.167210 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:09:15 crc kubenswrapper[4769]: I1006 08:09:15.166030 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:09:15 crc kubenswrapper[4769]: E1006 08:09:15.167137 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:09:28 crc kubenswrapper[4769]: I1006 08:09:28.168573 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:09:28 crc kubenswrapper[4769]: E1006 08:09:28.169743 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:09:40 crc kubenswrapper[4769]: I1006 08:09:40.165983 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:09:40 crc kubenswrapper[4769]: E1006 08:09:40.166744 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:09:54 crc kubenswrapper[4769]: I1006 08:09:54.170403 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:09:54 crc kubenswrapper[4769]: I1006 08:09:54.809753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4"} Oct 06 08:09:57 crc kubenswrapper[4769]: I1006 08:09:57.342153 4769 scope.go:117] "RemoveContainer" containerID="173c89c6169289228d0f5bbaedfc5c231325927bb49997065c014e87d728bcf6" Oct 06 08:09:57 crc kubenswrapper[4769]: I1006 08:09:57.364635 4769 scope.go:117] "RemoveContainer" containerID="cafdcd17a3356148aa3dd5562ab7d53bdd96a47014af141a165cd24bb36cde14" Oct 06 08:09:57 crc kubenswrapper[4769]: I1006 08:09:57.383508 4769 scope.go:117] "RemoveContainer" containerID="91437830c3557450230a8ab0f6be1703323111add15ec8ca2a0a69f719f44f15" Oct 06 08:12:22 crc kubenswrapper[4769]: I1006 08:12:22.246483 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:12:22 crc kubenswrapper[4769]: I1006 08:12:22.247104 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:12:52 crc kubenswrapper[4769]: I1006 08:12:52.246091 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:12:52 crc kubenswrapper[4769]: I1006 08:12:52.246733 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.245862 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.246610 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.246672 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.247640 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.247745 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4" gracePeriod=600 Oct 06 
08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.536405 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4" exitCode=0 Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.536643 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4"} Oct 06 08:13:22 crc kubenswrapper[4769]: I1006 08:13:22.536712 4769 scope.go:117] "RemoveContainer" containerID="a0da7afe3b9863cec9e28cf9803f02278de86e91cfd82472052d65dc217d7053" Oct 06 08:13:23 crc kubenswrapper[4769]: I1006 08:13:23.545970 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1"} Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.747045 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:41 crc kubenswrapper[4769]: E1006 08:13:41.748047 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="extract-content" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.748062 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="extract-content" Oct 06 08:13:41 crc kubenswrapper[4769]: E1006 08:13:41.748088 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="extract-utilities" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.748097 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="extract-utilities" Oct 06 08:13:41 crc kubenswrapper[4769]: E1006 08:13:41.748116 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="registry-server" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.748124 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="registry-server" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.748358 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e99509-4559-4842-9916-aa43b5aa0d7a" containerName="registry-server" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.750124 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.759018 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.893409 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgd5\" (UniqueName: \"kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.893563 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.894402 4769 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.996403 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.996501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.996542 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtgd5\" (UniqueName: \"kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.997145 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:41 crc kubenswrapper[4769]: I1006 08:13:41.997203 4769 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:42 crc kubenswrapper[4769]: I1006 08:13:42.019139 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtgd5\" (UniqueName: \"kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5\") pod \"community-operators-8schf\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:42 crc kubenswrapper[4769]: I1006 08:13:42.073046 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:42 crc kubenswrapper[4769]: I1006 08:13:42.608378 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:42 crc kubenswrapper[4769]: I1006 08:13:42.688482 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerStarted","Data":"b690155ff7dc04ef9a2f92bd2f243bae3a977247753842e5f171035a85f4a9d0"} Oct 06 08:13:43 crc kubenswrapper[4769]: I1006 08:13:43.700718 4769 generic.go:334] "Generic (PLEG): container finished" podID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerID="b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5" exitCode=0 Oct 06 08:13:43 crc kubenswrapper[4769]: I1006 08:13:43.700797 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerDied","Data":"b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5"} Oct 06 08:13:43 crc 
kubenswrapper[4769]: I1006 08:13:43.708100 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 08:13:44 crc kubenswrapper[4769]: I1006 08:13:44.711532 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerStarted","Data":"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9"} Oct 06 08:13:45 crc kubenswrapper[4769]: I1006 08:13:45.720076 4769 generic.go:334] "Generic (PLEG): container finished" podID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerID="78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9" exitCode=0 Oct 06 08:13:45 crc kubenswrapper[4769]: I1006 08:13:45.720119 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerDied","Data":"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9"} Oct 06 08:13:46 crc kubenswrapper[4769]: I1006 08:13:46.732889 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerStarted","Data":"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365"} Oct 06 08:13:46 crc kubenswrapper[4769]: I1006 08:13:46.757945 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8schf" podStartSLOduration=3.333815779 podStartE2EDuration="5.757915273s" podCreationTimestamp="2025-10-06 08:13:41 +0000 UTC" firstStartedPulling="2025-10-06 08:13:43.707699864 +0000 UTC m=+3420.231981021" lastFinishedPulling="2025-10-06 08:13:46.131799368 +0000 UTC m=+3422.656080515" observedRunningTime="2025-10-06 08:13:46.757132281 +0000 UTC m=+3423.281413438" watchObservedRunningTime="2025-10-06 08:13:46.757915273 +0000 UTC 
m=+3423.282196450" Oct 06 08:13:52 crc kubenswrapper[4769]: I1006 08:13:52.073617 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:52 crc kubenswrapper[4769]: I1006 08:13:52.074406 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:52 crc kubenswrapper[4769]: I1006 08:13:52.115235 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:52 crc kubenswrapper[4769]: I1006 08:13:52.824779 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:52 crc kubenswrapper[4769]: I1006 08:13:52.878366 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:54 crc kubenswrapper[4769]: I1006 08:13:54.796125 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8schf" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="registry-server" containerID="cri-o://59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365" gracePeriod=2 Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.740182 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.805936 4769 generic.go:334] "Generic (PLEG): container finished" podID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerID="59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365" exitCode=0 Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.805977 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerDied","Data":"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365"} Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.806008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8schf" event={"ID":"0c5762de-8c9e-4e69-abff-31985cc9c038","Type":"ContainerDied","Data":"b690155ff7dc04ef9a2f92bd2f243bae3a977247753842e5f171035a85f4a9d0"} Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.806007 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8schf" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.806027 4769 scope.go:117] "RemoveContainer" containerID="59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.827874 4769 scope.go:117] "RemoveContainer" containerID="78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.849290 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities\") pod \"0c5762de-8c9e-4e69-abff-31985cc9c038\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.849539 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content\") pod \"0c5762de-8c9e-4e69-abff-31985cc9c038\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.849667 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtgd5\" (UniqueName: \"kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5\") pod \"0c5762de-8c9e-4e69-abff-31985cc9c038\" (UID: \"0c5762de-8c9e-4e69-abff-31985cc9c038\") " Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.850404 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities" (OuterVolumeSpecName: "utilities") pod "0c5762de-8c9e-4e69-abff-31985cc9c038" (UID: "0c5762de-8c9e-4e69-abff-31985cc9c038"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.857124 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5" (OuterVolumeSpecName: "kube-api-access-gtgd5") pod "0c5762de-8c9e-4e69-abff-31985cc9c038" (UID: "0c5762de-8c9e-4e69-abff-31985cc9c038"). InnerVolumeSpecName "kube-api-access-gtgd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.877623 4769 scope.go:117] "RemoveContainer" containerID="b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.897956 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c5762de-8c9e-4e69-abff-31985cc9c038" (UID: "0c5762de-8c9e-4e69-abff-31985cc9c038"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.928382 4769 scope.go:117] "RemoveContainer" containerID="59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365" Oct 06 08:13:55 crc kubenswrapper[4769]: E1006 08:13:55.928883 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365\": container with ID starting with 59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365 not found: ID does not exist" containerID="59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.928914 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365"} err="failed to get container status \"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365\": rpc error: code = NotFound desc = could not find container \"59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365\": container with ID starting with 59209b79c178e09c8c19dd173f2db0e2938f7dd6735a7b000702de26f85e7365 not found: ID does not exist" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.928934 4769 scope.go:117] "RemoveContainer" containerID="78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9" Oct 06 08:13:55 crc kubenswrapper[4769]: E1006 08:13:55.929226 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9\": container with ID starting with 78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9 not found: ID does not exist" containerID="78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.929250 
4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9"} err="failed to get container status \"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9\": rpc error: code = NotFound desc = could not find container \"78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9\": container with ID starting with 78ecb5d8f9207e27669d0c409631cf784d20c0e7077f71d0a48844a794049ad9 not found: ID does not exist" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.929262 4769 scope.go:117] "RemoveContainer" containerID="b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5" Oct 06 08:13:55 crc kubenswrapper[4769]: E1006 08:13:55.929478 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5\": container with ID starting with b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5 not found: ID does not exist" containerID="b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.929495 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5"} err="failed to get container status \"b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5\": rpc error: code = NotFound desc = could not find container \"b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5\": container with ID starting with b218cc3d8fc07910ffb8e02561980d51d9c863e04ed11a7da0494ef37be572c5 not found: ID does not exist" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.952274 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.952303 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtgd5\" (UniqueName: \"kubernetes.io/projected/0c5762de-8c9e-4e69-abff-31985cc9c038-kube-api-access-gtgd5\") on node \"crc\" DevicePath \"\"" Oct 06 08:13:55 crc kubenswrapper[4769]: I1006 08:13:55.952314 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c5762de-8c9e-4e69-abff-31985cc9c038-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:13:56 crc kubenswrapper[4769]: I1006 08:13:56.137654 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:56 crc kubenswrapper[4769]: I1006 08:13:56.145942 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8schf"] Oct 06 08:13:56 crc kubenswrapper[4769]: I1006 08:13:56.176286 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" path="/var/lib/kubelet/pods/0c5762de-8c9e-4e69-abff-31985cc9c038/volumes" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.272119 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:14:48 crc kubenswrapper[4769]: E1006 08:14:48.272988 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="extract-content" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.273000 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="extract-content" Oct 06 08:14:48 crc kubenswrapper[4769]: E1006 08:14:48.273018 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" 
containerName="extract-utilities" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.273025 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="extract-utilities" Oct 06 08:14:48 crc kubenswrapper[4769]: E1006 08:14:48.273052 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="registry-server" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.273058 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="registry-server" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.273231 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5762de-8c9e-4e69-abff-31985cc9c038" containerName="registry-server" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.274559 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.288934 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.408769 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.408903 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58pw\" (UniqueName: \"kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " 
pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.408999 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.511071 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.511398 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.511559 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58pw\" (UniqueName: \"kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.511771 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " 
pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.511899 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.533190 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58pw\" (UniqueName: \"kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw\") pod \"redhat-operators-cbbms\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:48 crc kubenswrapper[4769]: I1006 08:14:48.600898 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:49 crc kubenswrapper[4769]: I1006 08:14:49.041183 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:14:49 crc kubenswrapper[4769]: I1006 08:14:49.228291 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerStarted","Data":"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a"} Oct 06 08:14:49 crc kubenswrapper[4769]: I1006 08:14:49.228576 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerStarted","Data":"da11835b0ebedae58b8d420c77b912f239dcfd7c2447e3625a683d8d3b57387d"} Oct 06 08:14:50 crc kubenswrapper[4769]: I1006 08:14:50.235682 4769 generic.go:334] "Generic (PLEG): container finished" podID="b1b02926-d618-4766-91cc-d34118522d58" 
containerID="91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a" exitCode=0 Oct 06 08:14:50 crc kubenswrapper[4769]: I1006 08:14:50.235752 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerDied","Data":"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a"} Oct 06 08:14:51 crc kubenswrapper[4769]: I1006 08:14:51.245490 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerStarted","Data":"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b"} Oct 06 08:14:52 crc kubenswrapper[4769]: I1006 08:14:52.255444 4769 generic.go:334] "Generic (PLEG): container finished" podID="b1b02926-d618-4766-91cc-d34118522d58" containerID="f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b" exitCode=0 Oct 06 08:14:52 crc kubenswrapper[4769]: I1006 08:14:52.255550 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerDied","Data":"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b"} Oct 06 08:14:54 crc kubenswrapper[4769]: I1006 08:14:54.283651 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerStarted","Data":"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8"} Oct 06 08:14:54 crc kubenswrapper[4769]: I1006 08:14:54.302730 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cbbms" podStartSLOduration=3.427928539 podStartE2EDuration="6.302715033s" podCreationTimestamp="2025-10-06 08:14:48 +0000 UTC" firstStartedPulling="2025-10-06 08:14:50.237222698 +0000 UTC 
m=+3486.761503845" lastFinishedPulling="2025-10-06 08:14:53.112009192 +0000 UTC m=+3489.636290339" observedRunningTime="2025-10-06 08:14:54.301087409 +0000 UTC m=+3490.825368556" watchObservedRunningTime="2025-10-06 08:14:54.302715033 +0000 UTC m=+3490.826996190" Oct 06 08:14:58 crc kubenswrapper[4769]: I1006 08:14:58.602227 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:58 crc kubenswrapper[4769]: I1006 08:14:58.602926 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:58 crc kubenswrapper[4769]: I1006 08:14:58.650264 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:59 crc kubenswrapper[4769]: I1006 08:14:59.365073 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:14:59 crc kubenswrapper[4769]: I1006 08:14:59.414451 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.181917 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm"] Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.184136 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.186127 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.186123 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.222326 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwpt\" (UniqueName: \"kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.222406 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.222636 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.234136 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm"] Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.324682 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqwpt\" (UniqueName: \"kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.324745 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.324850 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.325611 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.330380 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.340031 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqwpt\" (UniqueName: \"kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt\") pod \"collect-profiles-29328975-htkbm\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.514105 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:00 crc kubenswrapper[4769]: I1006 08:15:00.932569 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm"] Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.337101 4769 generic.go:334] "Generic (PLEG): container finished" podID="de82ba69-8d52-4a72-8162-abf8300513e8" containerID="c0a462f280d5207323446090553cf08759b02fdd23cd274b7f2e5e9470f09edc" exitCode=0 Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.337164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" event={"ID":"de82ba69-8d52-4a72-8162-abf8300513e8","Type":"ContainerDied","Data":"c0a462f280d5207323446090553cf08759b02fdd23cd274b7f2e5e9470f09edc"} Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.337452 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" 
event={"ID":"de82ba69-8d52-4a72-8162-abf8300513e8","Type":"ContainerStarted","Data":"9298ade620a3d0854d7841020ac51e2f74715b2b14e2cae3f495aec43fad8f33"} Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.337596 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cbbms" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="registry-server" containerID="cri-o://df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8" gracePeriod=2 Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.758379 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.856140 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities\") pod \"b1b02926-d618-4766-91cc-d34118522d58\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.856235 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58pw\" (UniqueName: \"kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw\") pod \"b1b02926-d618-4766-91cc-d34118522d58\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.856315 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") pod \"b1b02926-d618-4766-91cc-d34118522d58\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.857222 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities" 
(OuterVolumeSpecName: "utilities") pod "b1b02926-d618-4766-91cc-d34118522d58" (UID: "b1b02926-d618-4766-91cc-d34118522d58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.862672 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw" (OuterVolumeSpecName: "kube-api-access-f58pw") pod "b1b02926-d618-4766-91cc-d34118522d58" (UID: "b1b02926-d618-4766-91cc-d34118522d58"). InnerVolumeSpecName "kube-api-access-f58pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.959502 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58pw\" (UniqueName: \"kubernetes.io/projected/b1b02926-d618-4766-91cc-d34118522d58-kube-api-access-f58pw\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:01 crc kubenswrapper[4769]: I1006 08:15:01.959526 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.059820 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b02926-d618-4766-91cc-d34118522d58" (UID: "b1b02926-d618-4766-91cc-d34118522d58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.060476 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") pod \"b1b02926-d618-4766-91cc-d34118522d58\" (UID: \"b1b02926-d618-4766-91cc-d34118522d58\") " Oct 06 08:15:02 crc kubenswrapper[4769]: W1006 08:15:02.060615 4769 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b1b02926-d618-4766-91cc-d34118522d58/volumes/kubernetes.io~empty-dir/catalog-content Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.060647 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1b02926-d618-4766-91cc-d34118522d58" (UID: "b1b02926-d618-4766-91cc-d34118522d58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.061015 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1b02926-d618-4766-91cc-d34118522d58-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.346817 4769 generic.go:334] "Generic (PLEG): container finished" podID="b1b02926-d618-4766-91cc-d34118522d58" containerID="df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8" exitCode=0 Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.346911 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerDied","Data":"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8"} Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.346938 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cbbms" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.346978 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cbbms" event={"ID":"b1b02926-d618-4766-91cc-d34118522d58","Type":"ContainerDied","Data":"da11835b0ebedae58b8d420c77b912f239dcfd7c2447e3625a683d8d3b57387d"} Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.347004 4769 scope.go:117] "RemoveContainer" containerID="df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.373297 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.380735 4769 scope.go:117] "RemoveContainer" containerID="f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.381410 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cbbms"] Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.403047 4769 scope.go:117] "RemoveContainer" containerID="91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.448228 4769 scope.go:117] "RemoveContainer" containerID="df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8" Oct 06 08:15:02 crc kubenswrapper[4769]: E1006 08:15:02.450533 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8\": container with ID starting with df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8 not found: ID does not exist" containerID="df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.450571 4769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8"} err="failed to get container status \"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8\": rpc error: code = NotFound desc = could not find container \"df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8\": container with ID starting with df00d9a10a626d46226eda7a5680aac01c91afeca24aab47d9164307a85595c8 not found: ID does not exist" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.450601 4769 scope.go:117] "RemoveContainer" containerID="f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b" Oct 06 08:15:02 crc kubenswrapper[4769]: E1006 08:15:02.450944 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b\": container with ID starting with f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b not found: ID does not exist" containerID="f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.450968 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b"} err="failed to get container status \"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b\": rpc error: code = NotFound desc = could not find container \"f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b\": container with ID starting with f6f12f420be3b95f7dccbb1bf2d4305bddcfc797ed0098b5ee77a78b0384ef2b not found: ID does not exist" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.450985 4769 scope.go:117] "RemoveContainer" containerID="91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a" Oct 06 08:15:02 crc kubenswrapper[4769]: E1006 
08:15:02.451761 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a\": container with ID starting with 91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a not found: ID does not exist" containerID="91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.451833 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a"} err="failed to get container status \"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a\": rpc error: code = NotFound desc = could not find container \"91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a\": container with ID starting with 91bd95d5a860e60f7f3c8cf53283a190c07f3ad95819acfe4a695faf3abcee7a not found: ID does not exist" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.674333 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.773503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume\") pod \"de82ba69-8d52-4a72-8162-abf8300513e8\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.773617 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqwpt\" (UniqueName: \"kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt\") pod \"de82ba69-8d52-4a72-8162-abf8300513e8\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.773692 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume\") pod \"de82ba69-8d52-4a72-8162-abf8300513e8\" (UID: \"de82ba69-8d52-4a72-8162-abf8300513e8\") " Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.774285 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume" (OuterVolumeSpecName: "config-volume") pod "de82ba69-8d52-4a72-8162-abf8300513e8" (UID: "de82ba69-8d52-4a72-8162-abf8300513e8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.778278 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "de82ba69-8d52-4a72-8162-abf8300513e8" (UID: "de82ba69-8d52-4a72-8162-abf8300513e8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.778306 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt" (OuterVolumeSpecName: "kube-api-access-dqwpt") pod "de82ba69-8d52-4a72-8162-abf8300513e8" (UID: "de82ba69-8d52-4a72-8162-abf8300513e8"). InnerVolumeSpecName "kube-api-access-dqwpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.876516 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de82ba69-8d52-4a72-8162-abf8300513e8-config-volume\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.876550 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqwpt\" (UniqueName: \"kubernetes.io/projected/de82ba69-8d52-4a72-8162-abf8300513e8-kube-api-access-dqwpt\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:02 crc kubenswrapper[4769]: I1006 08:15:02.876564 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/de82ba69-8d52-4a72-8162-abf8300513e8-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 06 08:15:03 crc kubenswrapper[4769]: I1006 08:15:03.356677 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" event={"ID":"de82ba69-8d52-4a72-8162-abf8300513e8","Type":"ContainerDied","Data":"9298ade620a3d0854d7841020ac51e2f74715b2b14e2cae3f495aec43fad8f33"} Oct 06 08:15:03 crc kubenswrapper[4769]: I1006 08:15:03.356716 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9298ade620a3d0854d7841020ac51e2f74715b2b14e2cae3f495aec43fad8f33" Oct 06 08:15:03 crc kubenswrapper[4769]: I1006 08:15:03.356730 4769 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328975-htkbm" Oct 06 08:15:03 crc kubenswrapper[4769]: I1006 08:15:03.740102 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8"] Oct 06 08:15:03 crc kubenswrapper[4769]: I1006 08:15:03.748845 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328930-5tnc8"] Oct 06 08:15:04 crc kubenswrapper[4769]: I1006 08:15:04.179661 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b02926-d618-4766-91cc-d34118522d58" path="/var/lib/kubelet/pods/b1b02926-d618-4766-91cc-d34118522d58/volumes" Oct 06 08:15:04 crc kubenswrapper[4769]: I1006 08:15:04.180846 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2e5f95-d641-4620-8048-dee21f4141da" path="/var/lib/kubelet/pods/df2e5f95-d641-4620-8048-dee21f4141da/volumes" Oct 06 08:15:22 crc kubenswrapper[4769]: I1006 08:15:22.245249 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:15:22 crc kubenswrapper[4769]: I1006 08:15:22.245731 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:15:52 crc kubenswrapper[4769]: I1006 08:15:52.245649 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:15:52 crc kubenswrapper[4769]: I1006 08:15:52.246228 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:15:57 crc kubenswrapper[4769]: I1006 08:15:57.554340 4769 scope.go:117] "RemoveContainer" containerID="375271b2362b330c3c2d1eb97bd28f818d7b7fb57e6d3b0c1ab9ef62c97879b9" Oct 06 08:16:22 crc kubenswrapper[4769]: I1006 08:16:22.245816 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:16:22 crc kubenswrapper[4769]: I1006 08:16:22.246618 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:16:22 crc kubenswrapper[4769]: I1006 08:16:22.246733 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:16:22 crc kubenswrapper[4769]: I1006 08:16:22.247767 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:16:22 crc kubenswrapper[4769]: I1006 08:16:22.247862 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" gracePeriod=600 Oct 06 08:16:22 crc kubenswrapper[4769]: E1006 08:16:22.385596 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:16:23 crc kubenswrapper[4769]: I1006 08:16:23.072093 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" exitCode=0 Oct 06 08:16:23 crc kubenswrapper[4769]: I1006 08:16:23.072136 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1"} Oct 06 08:16:23 crc kubenswrapper[4769]: I1006 08:16:23.072171 4769 scope.go:117] "RemoveContainer" containerID="c57a2dd74af9836792972b3f8b827d061d4c30d911cf38db52862074d0c3f2c4" Oct 06 08:16:23 crc kubenswrapper[4769]: I1006 08:16:23.072865 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:16:23 crc kubenswrapper[4769]: E1006 08:16:23.073221 4769 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:16:36 crc kubenswrapper[4769]: I1006 08:16:36.165828 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:16:36 crc kubenswrapper[4769]: E1006 08:16:36.167059 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:16:51 crc kubenswrapper[4769]: I1006 08:16:51.167151 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:16:51 crc kubenswrapper[4769]: E1006 08:16:51.168794 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:17:02 crc kubenswrapper[4769]: I1006 08:17:02.166262 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:17:02 crc kubenswrapper[4769]: E1006 08:17:02.167008 4769 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:17:16 crc kubenswrapper[4769]: I1006 08:17:16.165704 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:17:16 crc kubenswrapper[4769]: E1006 08:17:16.166406 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:17:31 crc kubenswrapper[4769]: I1006 08:17:31.166958 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:17:31 crc kubenswrapper[4769]: E1006 08:17:31.168238 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:17:44 crc kubenswrapper[4769]: I1006 08:17:44.171993 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:17:44 crc kubenswrapper[4769]: E1006 
08:17:44.172661 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:17:56 crc kubenswrapper[4769]: I1006 08:17:56.167604 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:17:56 crc kubenswrapper[4769]: E1006 08:17:56.168369 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.972325 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:06 crc kubenswrapper[4769]: E1006 08:18:06.973383 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de82ba69-8d52-4a72-8162-abf8300513e8" containerName="collect-profiles" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973398 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="de82ba69-8d52-4a72-8162-abf8300513e8" containerName="collect-profiles" Oct 06 08:18:06 crc kubenswrapper[4769]: E1006 08:18:06.973410 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="extract-utilities" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973525 4769 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="extract-utilities" Oct 06 08:18:06 crc kubenswrapper[4769]: E1006 08:18:06.973552 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="extract-content" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973559 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="extract-content" Oct 06 08:18:06 crc kubenswrapper[4769]: E1006 08:18:06.973592 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="registry-server" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973597 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="registry-server" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973827 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b02926-d618-4766-91cc-d34118522d58" containerName="registry-server" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.973843 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="de82ba69-8d52-4a72-8162-abf8300513e8" containerName="collect-profiles" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.976634 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:06 crc kubenswrapper[4769]: I1006 08:18:06.990115 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.023987 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.024049 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xxvl\" (UniqueName: \"kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.024128 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.125542 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.125903 4769 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2xxvl\" (UniqueName: \"kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.125959 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.126276 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.126382 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.153458 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xxvl\" (UniqueName: \"kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl\") pod \"certified-operators-6vs6s\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.344207 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:07 crc kubenswrapper[4769]: I1006 08:18:07.809135 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.030453 4769 generic.go:334] "Generic (PLEG): container finished" podID="ceb3327c-bb24-4d48-9609-430906698ce9" containerID="765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999" exitCode=0 Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.030551 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerDied","Data":"765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999"} Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.030815 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerStarted","Data":"5d9a740e2abd8e017673b55fd2f5a4c85d0407f31d36014f027b1251a06aafa1"} Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.775741 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.783318 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.786677 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.958786 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.959161 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdtrp\" (UniqueName: \"kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:08 crc kubenswrapper[4769]: I1006 08:18:08.959211 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.039401 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerStarted","Data":"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9"} Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.060352 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.060644 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdtrp\" (UniqueName: \"kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.060794 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.060882 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.061237 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.080617 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdtrp\" (UniqueName: 
\"kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp\") pod \"redhat-marketplace-fx96k\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.107916 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:09 crc kubenswrapper[4769]: I1006 08:18:09.550498 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:09 crc kubenswrapper[4769]: W1006 08:18:09.561075 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2394cf98_80d4_4070_a0ce_6f10ff2ddb01.slice/crio-53ed51d016e79f1d112df74070ecbd21eb25b70b2c1e8ea45d10eea1135c28aa WatchSource:0}: Error finding container 53ed51d016e79f1d112df74070ecbd21eb25b70b2c1e8ea45d10eea1135c28aa: Status 404 returned error can't find the container with id 53ed51d016e79f1d112df74070ecbd21eb25b70b2c1e8ea45d10eea1135c28aa Oct 06 08:18:10 crc kubenswrapper[4769]: I1006 08:18:10.052969 4769 generic.go:334] "Generic (PLEG): container finished" podID="ceb3327c-bb24-4d48-9609-430906698ce9" containerID="5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9" exitCode=0 Oct 06 08:18:10 crc kubenswrapper[4769]: I1006 08:18:10.053170 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerDied","Data":"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9"} Oct 06 08:18:10 crc kubenswrapper[4769]: I1006 08:18:10.056804 4769 generic.go:334] "Generic (PLEG): container finished" podID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerID="2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a" exitCode=0 Oct 06 08:18:10 crc 
kubenswrapper[4769]: I1006 08:18:10.056864 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerDied","Data":"2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a"} Oct 06 08:18:10 crc kubenswrapper[4769]: I1006 08:18:10.056903 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerStarted","Data":"53ed51d016e79f1d112df74070ecbd21eb25b70b2c1e8ea45d10eea1135c28aa"} Oct 06 08:18:11 crc kubenswrapper[4769]: I1006 08:18:11.068963 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerStarted","Data":"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796"} Oct 06 08:18:11 crc kubenswrapper[4769]: I1006 08:18:11.097102 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vs6s" podStartSLOduration=2.690296618 podStartE2EDuration="5.097079966s" podCreationTimestamp="2025-10-06 08:18:06 +0000 UTC" firstStartedPulling="2025-10-06 08:18:08.034654582 +0000 UTC m=+3684.558935729" lastFinishedPulling="2025-10-06 08:18:10.44143793 +0000 UTC m=+3686.965719077" observedRunningTime="2025-10-06 08:18:11.088948912 +0000 UTC m=+3687.613230059" watchObservedRunningTime="2025-10-06 08:18:11.097079966 +0000 UTC m=+3687.621361113" Oct 06 08:18:11 crc kubenswrapper[4769]: I1006 08:18:11.167153 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:18:11 crc kubenswrapper[4769]: E1006 08:18:11.167450 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:18:12 crc kubenswrapper[4769]: I1006 08:18:12.078863 4769 generic.go:334] "Generic (PLEG): container finished" podID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerID="f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9" exitCode=0 Oct 06 08:18:12 crc kubenswrapper[4769]: I1006 08:18:12.078975 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerDied","Data":"f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9"} Oct 06 08:18:13 crc kubenswrapper[4769]: I1006 08:18:13.088584 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerStarted","Data":"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33"} Oct 06 08:18:13 crc kubenswrapper[4769]: I1006 08:18:13.113271 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fx96k" podStartSLOduration=2.538604578 podStartE2EDuration="5.113249364s" podCreationTimestamp="2025-10-06 08:18:08 +0000 UTC" firstStartedPulling="2025-10-06 08:18:10.059743704 +0000 UTC m=+3686.584024861" lastFinishedPulling="2025-10-06 08:18:12.6343885 +0000 UTC m=+3689.158669647" observedRunningTime="2025-10-06 08:18:13.107162547 +0000 UTC m=+3689.631443694" watchObservedRunningTime="2025-10-06 08:18:13.113249364 +0000 UTC m=+3689.637530521" Oct 06 08:18:17 crc kubenswrapper[4769]: I1006 08:18:17.344486 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:17 crc 
kubenswrapper[4769]: I1006 08:18:17.345672 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:17 crc kubenswrapper[4769]: I1006 08:18:17.390962 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:18 crc kubenswrapper[4769]: I1006 08:18:18.211635 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:18 crc kubenswrapper[4769]: I1006 08:18:18.271243 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:19 crc kubenswrapper[4769]: I1006 08:18:19.108658 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:19 crc kubenswrapper[4769]: I1006 08:18:19.109106 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:19 crc kubenswrapper[4769]: I1006 08:18:19.180130 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.157729 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6vs6s" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="registry-server" containerID="cri-o://f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796" gracePeriod=2 Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.228331 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:20 crc kubenswrapper[4769]: E1006 08:18:20.494534 4769 cadvisor_stats_provider.go:516] "Partial failure issuing 
cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceb3327c_bb24_4d48_9609_430906698ce9.slice/crio-conmon-f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796.scope\": RecentStats: unable to find data in memory cache]" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.627683 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.795406 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xxvl\" (UniqueName: \"kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl\") pod \"ceb3327c-bb24-4d48-9609-430906698ce9\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.795509 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content\") pod \"ceb3327c-bb24-4d48-9609-430906698ce9\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.795672 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities\") pod \"ceb3327c-bb24-4d48-9609-430906698ce9\" (UID: \"ceb3327c-bb24-4d48-9609-430906698ce9\") " Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.796837 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities" (OuterVolumeSpecName: "utilities") pod "ceb3327c-bb24-4d48-9609-430906698ce9" (UID: "ceb3327c-bb24-4d48-9609-430906698ce9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.803125 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl" (OuterVolumeSpecName: "kube-api-access-2xxvl") pod "ceb3327c-bb24-4d48-9609-430906698ce9" (UID: "ceb3327c-bb24-4d48-9609-430906698ce9"). InnerVolumeSpecName "kube-api-access-2xxvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.848645 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ceb3327c-bb24-4d48-9609-430906698ce9" (UID: "ceb3327c-bb24-4d48-9609-430906698ce9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.898004 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xxvl\" (UniqueName: \"kubernetes.io/projected/ceb3327c-bb24-4d48-9609-430906698ce9-kube-api-access-2xxvl\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.898045 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:20 crc kubenswrapper[4769]: I1006 08:18:20.898057 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ceb3327c-bb24-4d48-9609-430906698ce9-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.171657 4769 generic.go:334] "Generic (PLEG): container finished" podID="ceb3327c-bb24-4d48-9609-430906698ce9" 
containerID="f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796" exitCode=0 Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.171725 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vs6s" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.171740 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerDied","Data":"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796"} Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.171813 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vs6s" event={"ID":"ceb3327c-bb24-4d48-9609-430906698ce9","Type":"ContainerDied","Data":"5d9a740e2abd8e017673b55fd2f5a4c85d0407f31d36014f027b1251a06aafa1"} Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.171845 4769 scope.go:117] "RemoveContainer" containerID="f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.206280 4769 scope.go:117] "RemoveContainer" containerID="5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.212735 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.220403 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6vs6s"] Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.227145 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.242498 4769 scope.go:117] "RemoveContainer" containerID="765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999" Oct 06 
08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.280455 4769 scope.go:117] "RemoveContainer" containerID="f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796" Oct 06 08:18:21 crc kubenswrapper[4769]: E1006 08:18:21.281329 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796\": container with ID starting with f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796 not found: ID does not exist" containerID="f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.281407 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796"} err="failed to get container status \"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796\": rpc error: code = NotFound desc = could not find container \"f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796\": container with ID starting with f6617e7728d49a00c88b226c8f199a4db37b5282263a103872eeab7e21594796 not found: ID does not exist" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.281534 4769 scope.go:117] "RemoveContainer" containerID="5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9" Oct 06 08:18:21 crc kubenswrapper[4769]: E1006 08:18:21.282136 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9\": container with ID starting with 5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9 not found: ID does not exist" containerID="5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.282207 4769 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9"} err="failed to get container status \"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9\": rpc error: code = NotFound desc = could not find container \"5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9\": container with ID starting with 5a3db0ec9c7931135d0f383bd6a1772dc690fc85d61e92750ec7a4a3b108eec9 not found: ID does not exist" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.282260 4769 scope.go:117] "RemoveContainer" containerID="765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999" Oct 06 08:18:21 crc kubenswrapper[4769]: E1006 08:18:21.282763 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999\": container with ID starting with 765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999 not found: ID does not exist" containerID="765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999" Oct 06 08:18:21 crc kubenswrapper[4769]: I1006 08:18:21.282807 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999"} err="failed to get container status \"765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999\": rpc error: code = NotFound desc = could not find container \"765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999\": container with ID starting with 765d105486efde804783318b7159e0bcb7196e10584e0b68dc7e38389f54c999 not found: ID does not exist" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.166340 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:18:22 crc kubenswrapper[4769]: E1006 08:18:22.167483 4769 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.178030 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" path="/var/lib/kubelet/pods/ceb3327c-bb24-4d48-9609-430906698ce9/volumes" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.181874 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fx96k" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="registry-server" containerID="cri-o://97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33" gracePeriod=2 Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.687743 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.835902 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdtrp\" (UniqueName: \"kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp\") pod \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.836036 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content\") pod \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.836102 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities\") pod \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\" (UID: \"2394cf98-80d4-4070-a0ce-6f10ff2ddb01\") " Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.837377 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities" (OuterVolumeSpecName: "utilities") pod "2394cf98-80d4-4070-a0ce-6f10ff2ddb01" (UID: "2394cf98-80d4-4070-a0ce-6f10ff2ddb01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.844692 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp" (OuterVolumeSpecName: "kube-api-access-pdtrp") pod "2394cf98-80d4-4070-a0ce-6f10ff2ddb01" (UID: "2394cf98-80d4-4070-a0ce-6f10ff2ddb01"). InnerVolumeSpecName "kube-api-access-pdtrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.853652 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2394cf98-80d4-4070-a0ce-6f10ff2ddb01" (UID: "2394cf98-80d4-4070-a0ce-6f10ff2ddb01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.939100 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.939136 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:22 crc kubenswrapper[4769]: I1006 08:18:22.939153 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdtrp\" (UniqueName: \"kubernetes.io/projected/2394cf98-80d4-4070-a0ce-6f10ff2ddb01-kube-api-access-pdtrp\") on node \"crc\" DevicePath \"\"" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.194501 4769 generic.go:334] "Generic (PLEG): container finished" podID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerID="97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33" exitCode=0 Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.194579 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerDied","Data":"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33"} Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.194628 4769 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-fx96k" event={"ID":"2394cf98-80d4-4070-a0ce-6f10ff2ddb01","Type":"ContainerDied","Data":"53ed51d016e79f1d112df74070ecbd21eb25b70b2c1e8ea45d10eea1135c28aa"} Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.194660 4769 scope.go:117] "RemoveContainer" containerID="97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.194840 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx96k" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.223130 4769 scope.go:117] "RemoveContainer" containerID="f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.243881 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.266903 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx96k"] Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.273921 4769 scope.go:117] "RemoveContainer" containerID="2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.294221 4769 scope.go:117] "RemoveContainer" containerID="97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33" Oct 06 08:18:23 crc kubenswrapper[4769]: E1006 08:18:23.294744 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33\": container with ID starting with 97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33 not found: ID does not exist" containerID="97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.294873 4769 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33"} err="failed to get container status \"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33\": rpc error: code = NotFound desc = could not find container \"97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33\": container with ID starting with 97f3ec9d51fb046b97645ef3ff89542e18ee7463160395f22dfd12c178519f33 not found: ID does not exist" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.294974 4769 scope.go:117] "RemoveContainer" containerID="f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9" Oct 06 08:18:23 crc kubenswrapper[4769]: E1006 08:18:23.295400 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9\": container with ID starting with f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9 not found: ID does not exist" containerID="f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.295554 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9"} err="failed to get container status \"f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9\": rpc error: code = NotFound desc = could not find container \"f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9\": container with ID starting with f106818141b5248286a51e999d919defbdb8a397a28e2d387c9d71ca81d371c9 not found: ID does not exist" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.295633 4769 scope.go:117] "RemoveContainer" containerID="2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a" Oct 06 08:18:23 crc kubenswrapper[4769]: E1006 
08:18:23.296115 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a\": container with ID starting with 2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a not found: ID does not exist" containerID="2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a" Oct 06 08:18:23 crc kubenswrapper[4769]: I1006 08:18:23.296251 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a"} err="failed to get container status \"2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a\": rpc error: code = NotFound desc = could not find container \"2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a\": container with ID starting with 2adb22b6d370525ad9b1a0799682e937c92b039eee4c5b4438a08606f47ed90a not found: ID does not exist" Oct 06 08:18:24 crc kubenswrapper[4769]: I1006 08:18:24.179442 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" path="/var/lib/kubelet/pods/2394cf98-80d4-4070-a0ce-6f10ff2ddb01/volumes" Oct 06 08:18:35 crc kubenswrapper[4769]: I1006 08:18:35.165973 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:18:35 crc kubenswrapper[4769]: E1006 08:18:35.166822 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:18:47 crc kubenswrapper[4769]: I1006 08:18:47.166538 
4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:18:47 crc kubenswrapper[4769]: E1006 08:18:47.167950 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:18:58 crc kubenswrapper[4769]: I1006 08:18:58.167366 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:18:58 crc kubenswrapper[4769]: E1006 08:18:58.168155 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:19:10 crc kubenswrapper[4769]: I1006 08:19:10.166223 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:19:10 crc kubenswrapper[4769]: E1006 08:19:10.167145 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:19:23 crc kubenswrapper[4769]: I1006 
08:19:23.165801 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:19:23 crc kubenswrapper[4769]: E1006 08:19:23.166884 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:19:38 crc kubenswrapper[4769]: I1006 08:19:38.165759 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:19:38 crc kubenswrapper[4769]: E1006 08:19:38.166510 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:19:51 crc kubenswrapper[4769]: I1006 08:19:51.166871 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:19:51 crc kubenswrapper[4769]: E1006 08:19:51.167857 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:20:06 crc 
kubenswrapper[4769]: I1006 08:20:06.166232 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:20:06 crc kubenswrapper[4769]: E1006 08:20:06.166860 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:20:18 crc kubenswrapper[4769]: I1006 08:20:18.166802 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:20:18 crc kubenswrapper[4769]: E1006 08:20:18.167755 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:20:31 crc kubenswrapper[4769]: I1006 08:20:31.165550 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:20:31 crc kubenswrapper[4769]: E1006 08:20:31.166414 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 
06 08:20:42 crc kubenswrapper[4769]: I1006 08:20:42.166880 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:20:42 crc kubenswrapper[4769]: E1006 08:20:42.168497 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:20:53 crc kubenswrapper[4769]: I1006 08:20:53.167562 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:20:53 crc kubenswrapper[4769]: E1006 08:20:53.168909 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:21:04 crc kubenswrapper[4769]: I1006 08:21:04.172164 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:21:04 crc kubenswrapper[4769]: E1006 08:21:04.173011 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:21:16 crc kubenswrapper[4769]: I1006 08:21:16.167084 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:21:16 crc kubenswrapper[4769]: E1006 08:21:16.167959 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:21:29 crc kubenswrapper[4769]: I1006 08:21:29.166004 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:21:29 crc kubenswrapper[4769]: I1006 08:21:29.753062 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2"} Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.589140 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590126 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590143 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590156 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" 
containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590164 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590186 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="extract-utilities" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590194 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="extract-utilities" Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590205 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="extract-utilities" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590212 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="extract-utilities" Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590242 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="extract-content" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590250 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" containerName="extract-content" Oct 06 08:23:42 crc kubenswrapper[4769]: E1006 08:23:42.590262 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="extract-content" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590270 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="extract-content" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590527 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="2394cf98-80d4-4070-a0ce-6f10ff2ddb01" 
containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.590664 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb3327c-bb24-4d48-9609-430906698ce9" containerName="registry-server" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.592249 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.604132 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.689158 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnstj\" (UniqueName: \"kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.689503 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.689547 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.790905 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rnstj\" (UniqueName: \"kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.791393 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.791416 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.791916 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.792556 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.819997 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnstj\" (UniqueName: 
\"kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj\") pod \"community-operators-khmp6\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:42 crc kubenswrapper[4769]: I1006 08:23:42.926942 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:43 crc kubenswrapper[4769]: I1006 08:23:43.446580 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:43 crc kubenswrapper[4769]: I1006 08:23:43.921244 4769 generic.go:334] "Generic (PLEG): container finished" podID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerID="bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9" exitCode=0 Oct 06 08:23:43 crc kubenswrapper[4769]: I1006 08:23:43.921286 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerDied","Data":"bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9"} Oct 06 08:23:43 crc kubenswrapper[4769]: I1006 08:23:43.921589 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerStarted","Data":"f2bdce44df6b117698406653f2cbad84fb093c466c66bc0d4c005e49bb0074f7"} Oct 06 08:23:43 crc kubenswrapper[4769]: I1006 08:23:43.924476 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 08:23:45 crc kubenswrapper[4769]: I1006 08:23:45.938898 4769 generic.go:334] "Generic (PLEG): container finished" podID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerID="e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9" exitCode=0 Oct 06 08:23:45 crc kubenswrapper[4769]: I1006 08:23:45.938957 4769 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerDied","Data":"e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9"} Oct 06 08:23:46 crc kubenswrapper[4769]: I1006 08:23:46.949471 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerStarted","Data":"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6"} Oct 06 08:23:46 crc kubenswrapper[4769]: I1006 08:23:46.970292 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-khmp6" podStartSLOduration=2.508343039 podStartE2EDuration="4.970275511s" podCreationTimestamp="2025-10-06 08:23:42 +0000 UTC" firstStartedPulling="2025-10-06 08:23:43.924195175 +0000 UTC m=+4020.448476322" lastFinishedPulling="2025-10-06 08:23:46.386127647 +0000 UTC m=+4022.910408794" observedRunningTime="2025-10-06 08:23:46.964969965 +0000 UTC m=+4023.489251112" watchObservedRunningTime="2025-10-06 08:23:46.970275511 +0000 UTC m=+4023.494556678" Oct 06 08:23:52 crc kubenswrapper[4769]: I1006 08:23:52.245213 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:23:52 crc kubenswrapper[4769]: I1006 08:23:52.245757 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:23:52 crc kubenswrapper[4769]: I1006 
08:23:52.927403 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:52 crc kubenswrapper[4769]: I1006 08:23:52.927486 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:52 crc kubenswrapper[4769]: I1006 08:23:52.968325 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:53 crc kubenswrapper[4769]: I1006 08:23:53.048666 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:53 crc kubenswrapper[4769]: I1006 08:23:53.202737 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.016476 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-khmp6" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="registry-server" containerID="cri-o://1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6" gracePeriod=2 Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.572281 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.608662 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content\") pod \"6db6ac6b-f689-46f1-95cd-37e9e679a818\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.608803 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnstj\" (UniqueName: \"kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj\") pod \"6db6ac6b-f689-46f1-95cd-37e9e679a818\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.608871 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities\") pod \"6db6ac6b-f689-46f1-95cd-37e9e679a818\" (UID: \"6db6ac6b-f689-46f1-95cd-37e9e679a818\") " Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.610059 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities" (OuterVolumeSpecName: "utilities") pod "6db6ac6b-f689-46f1-95cd-37e9e679a818" (UID: "6db6ac6b-f689-46f1-95cd-37e9e679a818"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.615524 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj" (OuterVolumeSpecName: "kube-api-access-rnstj") pod "6db6ac6b-f689-46f1-95cd-37e9e679a818" (UID: "6db6ac6b-f689-46f1-95cd-37e9e679a818"). InnerVolumeSpecName "kube-api-access-rnstj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.677621 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6db6ac6b-f689-46f1-95cd-37e9e679a818" (UID: "6db6ac6b-f689-46f1-95cd-37e9e679a818"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.710982 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnstj\" (UniqueName: \"kubernetes.io/projected/6db6ac6b-f689-46f1-95cd-37e9e679a818-kube-api-access-rnstj\") on node \"crc\" DevicePath \"\"" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.711013 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:23:55 crc kubenswrapper[4769]: I1006 08:23:55.711022 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6db6ac6b-f689-46f1-95cd-37e9e679a818-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.027666 4769 generic.go:334] "Generic (PLEG): container finished" podID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerID="1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6" exitCode=0 Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.027712 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerDied","Data":"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6"} Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.027742 4769 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-khmp6" event={"ID":"6db6ac6b-f689-46f1-95cd-37e9e679a818","Type":"ContainerDied","Data":"f2bdce44df6b117698406653f2cbad84fb093c466c66bc0d4c005e49bb0074f7"} Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.027761 4769 scope.go:117] "RemoveContainer" containerID="1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.027910 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-khmp6" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.056992 4769 scope.go:117] "RemoveContainer" containerID="e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.065716 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.075272 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-khmp6"] Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.084919 4769 scope.go:117] "RemoveContainer" containerID="bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.130209 4769 scope.go:117] "RemoveContainer" containerID="1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6" Oct 06 08:23:56 crc kubenswrapper[4769]: E1006 08:23:56.130440 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6\": container with ID starting with 1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6 not found: ID does not exist" containerID="1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 
08:23:56.130472 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6"} err="failed to get container status \"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6\": rpc error: code = NotFound desc = could not find container \"1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6\": container with ID starting with 1bdf54d6a1f830956f58c0e4f2831a03d48e9abb2b09bfd505a61d391c7060f6 not found: ID does not exist" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.130501 4769 scope.go:117] "RemoveContainer" containerID="e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9" Oct 06 08:23:56 crc kubenswrapper[4769]: E1006 08:23:56.130919 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9\": container with ID starting with e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9 not found: ID does not exist" containerID="e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.130964 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9"} err="failed to get container status \"e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9\": rpc error: code = NotFound desc = could not find container \"e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9\": container with ID starting with e4b16e9c71eae14c309f1bbbe7b6e0b4dee1e7ff2811e73d64b93159cc994bc9 not found: ID does not exist" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.130991 4769 scope.go:117] "RemoveContainer" containerID="bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9" Oct 06 08:23:56 crc 
kubenswrapper[4769]: E1006 08:23:56.131493 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9\": container with ID starting with bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9 not found: ID does not exist" containerID="bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.131514 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9"} err="failed to get container status \"bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9\": rpc error: code = NotFound desc = could not find container \"bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9\": container with ID starting with bb7728238abef405316c10b28b92c28dbf85055514938163984d0016fadf92e9 not found: ID does not exist" Oct 06 08:23:56 crc kubenswrapper[4769]: I1006 08:23:56.178279 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" path="/var/lib/kubelet/pods/6db6ac6b-f689-46f1-95cd-37e9e679a818/volumes" Oct 06 08:24:22 crc kubenswrapper[4769]: I1006 08:24:22.245489 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:24:22 crc kubenswrapper[4769]: I1006 08:24:22.245988 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.245385 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.245889 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.245936 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.246627 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.246678 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2" gracePeriod=600 Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.539768 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" 
containerID="71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2" exitCode=0 Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.539844 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2"} Oct 06 08:24:52 crc kubenswrapper[4769]: I1006 08:24:52.540227 4769 scope.go:117] "RemoveContainer" containerID="e9638120a191de4ef76e8861decc0e15a0d88e3a651ef878faa69d4e0922adb1" Oct 06 08:24:53 crc kubenswrapper[4769]: I1006 08:24:53.549850 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"} Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.073067 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:16 crc kubenswrapper[4769]: E1006 08:25:16.074259 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="extract-utilities" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.074278 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="extract-utilities" Oct 06 08:25:16 crc kubenswrapper[4769]: E1006 08:25:16.074288 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="registry-server" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.074295 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="registry-server" Oct 06 08:25:16 crc kubenswrapper[4769]: E1006 08:25:16.074351 4769 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="extract-content" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.074361 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="extract-content" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.074605 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="6db6ac6b-f689-46f1-95cd-37e9e679a818" containerName="registry-server" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.076801 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.082911 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.243998 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.244065 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.244135 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kd8\" (UniqueName: \"kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8\") pod 
\"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.345823 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.345895 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.345945 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kd8\" (UniqueName: \"kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.346459 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.346526 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities\") pod \"redhat-operators-dxt7x\" (UID: 
\"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.365337 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kd8\" (UniqueName: \"kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8\") pod \"redhat-operators-dxt7x\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.412120 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:16 crc kubenswrapper[4769]: I1006 08:25:16.941173 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:17 crc kubenswrapper[4769]: I1006 08:25:17.769215 4769 generic.go:334] "Generic (PLEG): container finished" podID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerID="aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d" exitCode=0 Oct 06 08:25:17 crc kubenswrapper[4769]: I1006 08:25:17.769279 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerDied","Data":"aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d"} Oct 06 08:25:17 crc kubenswrapper[4769]: I1006 08:25:17.769555 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerStarted","Data":"1173667b0f8a259b7e8362e8797433e497ce1d7d9058232082a4202366d017bf"} Oct 06 08:25:19 crc kubenswrapper[4769]: I1006 08:25:19.790500 4769 generic.go:334] "Generic (PLEG): container finished" podID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerID="01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379" 
exitCode=0 Oct 06 08:25:19 crc kubenswrapper[4769]: I1006 08:25:19.790565 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerDied","Data":"01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379"} Oct 06 08:25:20 crc kubenswrapper[4769]: I1006 08:25:20.803041 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerStarted","Data":"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659"} Oct 06 08:25:20 crc kubenswrapper[4769]: I1006 08:25:20.828205 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxt7x" podStartSLOduration=2.348504863 podStartE2EDuration="4.828187236s" podCreationTimestamp="2025-10-06 08:25:16 +0000 UTC" firstStartedPulling="2025-10-06 08:25:17.772620505 +0000 UTC m=+4114.296901652" lastFinishedPulling="2025-10-06 08:25:20.252302878 +0000 UTC m=+4116.776584025" observedRunningTime="2025-10-06 08:25:20.820998718 +0000 UTC m=+4117.345279865" watchObservedRunningTime="2025-10-06 08:25:20.828187236 +0000 UTC m=+4117.352468383" Oct 06 08:25:26 crc kubenswrapper[4769]: I1006 08:25:26.413652 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:26 crc kubenswrapper[4769]: I1006 08:25:26.416248 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:26 crc kubenswrapper[4769]: I1006 08:25:26.472791 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:26 crc kubenswrapper[4769]: I1006 08:25:26.915923 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:26 crc kubenswrapper[4769]: I1006 08:25:26.967817 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:28 crc kubenswrapper[4769]: I1006 08:25:28.870145 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxt7x" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="registry-server" containerID="cri-o://570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659" gracePeriod=2 Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.332708 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.499573 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content\") pod \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.499906 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5kd8\" (UniqueName: \"kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8\") pod \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.499935 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities\") pod \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\" (UID: \"39bf42bd-7b1a-414f-9f80-3b77d646ae70\") " Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.501004 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities" (OuterVolumeSpecName: "utilities") pod "39bf42bd-7b1a-414f-9f80-3b77d646ae70" (UID: "39bf42bd-7b1a-414f-9f80-3b77d646ae70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.505102 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8" (OuterVolumeSpecName: "kube-api-access-f5kd8") pod "39bf42bd-7b1a-414f-9f80-3b77d646ae70" (UID: "39bf42bd-7b1a-414f-9f80-3b77d646ae70"). InnerVolumeSpecName "kube-api-access-f5kd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.602413 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.602496 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5kd8\" (UniqueName: \"kubernetes.io/projected/39bf42bd-7b1a-414f-9f80-3b77d646ae70-kube-api-access-f5kd8\") on node \"crc\" DevicePath \"\"" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.623026 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39bf42bd-7b1a-414f-9f80-3b77d646ae70" (UID: "39bf42bd-7b1a-414f-9f80-3b77d646ae70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.704588 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bf42bd-7b1a-414f-9f80-3b77d646ae70-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.889469 4769 generic.go:334] "Generic (PLEG): container finished" podID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerID="570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659" exitCode=0 Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.889517 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerDied","Data":"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659"} Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.889552 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxt7x" event={"ID":"39bf42bd-7b1a-414f-9f80-3b77d646ae70","Type":"ContainerDied","Data":"1173667b0f8a259b7e8362e8797433e497ce1d7d9058232082a4202366d017bf"} Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.889543 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxt7x" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.889574 4769 scope.go:117] "RemoveContainer" containerID="570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.907961 4769 scope.go:117] "RemoveContainer" containerID="01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.919704 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.926106 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxt7x"] Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.948136 4769 scope.go:117] "RemoveContainer" containerID="aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.983267 4769 scope.go:117] "RemoveContainer" containerID="570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659" Oct 06 08:25:29 crc kubenswrapper[4769]: E1006 08:25:29.983732 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659\": container with ID starting with 570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659 not found: ID does not exist" containerID="570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.983782 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659"} err="failed to get container status \"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659\": rpc error: code = NotFound desc = could not find container 
\"570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659\": container with ID starting with 570895b6e2bee0bd71ac250b538e8bd7f2a20e27da954f5b0e61be9706d6a659 not found: ID does not exist" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.983812 4769 scope.go:117] "RemoveContainer" containerID="01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379" Oct 06 08:25:29 crc kubenswrapper[4769]: E1006 08:25:29.984393 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379\": container with ID starting with 01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379 not found: ID does not exist" containerID="01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.984478 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379"} err="failed to get container status \"01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379\": rpc error: code = NotFound desc = could not find container \"01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379\": container with ID starting with 01fb78a7d89150b9d1b9e6d9fd69c3668e73d325bdea9e6181e7e25f66b48379 not found: ID does not exist" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.984509 4769 scope.go:117] "RemoveContainer" containerID="aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d" Oct 06 08:25:29 crc kubenswrapper[4769]: E1006 08:25:29.984856 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d\": container with ID starting with aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d not found: ID does not exist" 
containerID="aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d" Oct 06 08:25:29 crc kubenswrapper[4769]: I1006 08:25:29.984887 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d"} err="failed to get container status \"aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d\": rpc error: code = NotFound desc = could not find container \"aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d\": container with ID starting with aa429926496c947def9a759ea19c8d7b4b5f437098f22dd2620eccf886af6e5d not found: ID does not exist" Oct 06 08:25:30 crc kubenswrapper[4769]: I1006 08:25:30.181776 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" path="/var/lib/kubelet/pods/39bf42bd-7b1a-414f-9f80-3b77d646ae70/volumes" Oct 06 08:26:52 crc kubenswrapper[4769]: I1006 08:26:52.246193 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:26:52 crc kubenswrapper[4769]: I1006 08:26:52.247512 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:27:22 crc kubenswrapper[4769]: I1006 08:27:22.245334 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Oct 06 08:27:22 crc kubenswrapper[4769]: I1006 08:27:22.245872 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:27:52 crc kubenswrapper[4769]: I1006 08:27:52.245852 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:27:52 crc kubenswrapper[4769]: I1006 08:27:52.246509 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:27:52 crc kubenswrapper[4769]: I1006 08:27:52.246551 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:27:52 crc kubenswrapper[4769]: I1006 08:27:52.247210 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:27:52 crc kubenswrapper[4769]: I1006 08:27:52.247272 4769 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" gracePeriod=600 Oct 06 08:27:52 crc kubenswrapper[4769]: E1006 08:27:52.381121 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:27:53 crc kubenswrapper[4769]: I1006 08:27:53.059890 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" exitCode=0 Oct 06 08:27:53 crc kubenswrapper[4769]: I1006 08:27:53.059962 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"} Oct 06 08:27:53 crc kubenswrapper[4769]: I1006 08:27:53.060250 4769 scope.go:117] "RemoveContainer" containerID="71435847629ec1030e6959c85f024eaee2db5287fab0a4081417f07545f9f7c2" Oct 06 08:27:53 crc kubenswrapper[4769]: I1006 08:27:53.060850 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:27:53 crc kubenswrapper[4769]: E1006 08:27:53.061123 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:06 crc kubenswrapper[4769]: I1006 08:28:06.166768 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:28:06 crc kubenswrapper[4769]: E1006 08:28:06.167667 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:17 crc kubenswrapper[4769]: I1006 08:28:17.166470 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:28:17 crc kubenswrapper[4769]: E1006 08:28:17.167147 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:30 crc kubenswrapper[4769]: I1006 08:28:30.166645 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:28:30 crc kubenswrapper[4769]: E1006 08:28:30.168855 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:45 crc kubenswrapper[4769]: I1006 08:28:45.168046 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:28:45 crc kubenswrapper[4769]: E1006 08:28:45.168703 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.164305 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:28:48 crc kubenswrapper[4769]: E1006 08:28:48.165363 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="registry-server" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.165377 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="registry-server" Oct 06 08:28:48 crc kubenswrapper[4769]: E1006 08:28:48.165400 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="extract-utilities" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.165406 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="extract-utilities" Oct 06 08:28:48 crc kubenswrapper[4769]: E1006 08:28:48.165437 4769 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="extract-content" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.165446 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="extract-content" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.165666 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bf42bd-7b1a-414f-9f80-3b77d646ae70" containerName="registry-server" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.167219 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.192711 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.361154 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbjp9\" (UniqueName: \"kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.362369 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.362414 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content\") pod 
\"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.463811 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbjp9\" (UniqueName: \"kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.463898 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.463930 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.464447 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.464535 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities\") pod \"certified-operators-kk4nt\" (UID: 
\"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.494768 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbjp9\" (UniqueName: \"kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9\") pod \"certified-operators-kk4nt\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.502601 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:48 crc kubenswrapper[4769]: I1006 08:28:48.966527 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:28:49 crc kubenswrapper[4769]: I1006 08:28:49.587743 4769 generic.go:334] "Generic (PLEG): container finished" podID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerID="b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f" exitCode=0 Oct 06 08:28:49 crc kubenswrapper[4769]: I1006 08:28:49.587999 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerDied","Data":"b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f"} Oct 06 08:28:49 crc kubenswrapper[4769]: I1006 08:28:49.588106 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerStarted","Data":"b4a081ab8a292b2a6aac6d4f7fba8547a502a842b9cac7075960474d6dc1fc2d"} Oct 06 08:28:49 crc kubenswrapper[4769]: I1006 08:28:49.592843 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 06 08:28:50 crc kubenswrapper[4769]: I1006 08:28:50.597579 4769 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerStarted","Data":"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b"} Oct 06 08:28:51 crc kubenswrapper[4769]: I1006 08:28:51.606577 4769 generic.go:334] "Generic (PLEG): container finished" podID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerID="44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b" exitCode=0 Oct 06 08:28:51 crc kubenswrapper[4769]: I1006 08:28:51.606621 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerDied","Data":"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b"} Oct 06 08:28:52 crc kubenswrapper[4769]: I1006 08:28:52.615164 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerStarted","Data":"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e"} Oct 06 08:28:52 crc kubenswrapper[4769]: I1006 08:28:52.630433 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kk4nt" podStartSLOduration=2.02446633 podStartE2EDuration="4.630398543s" podCreationTimestamp="2025-10-06 08:28:48 +0000 UTC" firstStartedPulling="2025-10-06 08:28:49.592627311 +0000 UTC m=+4326.116908458" lastFinishedPulling="2025-10-06 08:28:52.198559524 +0000 UTC m=+4328.722840671" observedRunningTime="2025-10-06 08:28:52.6285002 +0000 UTC m=+4329.152781357" watchObservedRunningTime="2025-10-06 08:28:52.630398543 +0000 UTC m=+4329.154679690" Oct 06 08:28:56 crc kubenswrapper[4769]: I1006 08:28:56.166292 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:28:56 crc 
kubenswrapper[4769]: E1006 08:28:56.167011 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:28:58 crc kubenswrapper[4769]: I1006 08:28:58.504042 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:58 crc kubenswrapper[4769]: I1006 08:28:58.504402 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:58 crc kubenswrapper[4769]: I1006 08:28:58.550356 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:58 crc kubenswrapper[4769]: I1006 08:28:58.705607 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:28:58 crc kubenswrapper[4769]: I1006 08:28:58.789400 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:29:00 crc kubenswrapper[4769]: I1006 08:29:00.689548 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kk4nt" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="registry-server" containerID="cri-o://cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e" gracePeriod=2 Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.207671 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 
08:29:01.210110 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.217038 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.237669 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbg48\" (UniqueName: \"kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.237842 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.237884 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.340067 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbg48\" (UniqueName: \"kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc 
kubenswrapper[4769]: I1006 08:29:01.340219 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.340257 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.341158 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.341196 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.365192 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbg48\" (UniqueName: \"kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48\") pod \"redhat-marketplace-4vkkh\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.461170 4769 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.529925 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.542992 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content\") pod \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.543042 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbjp9\" (UniqueName: \"kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9\") pod \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.543259 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities\") pod \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\" (UID: \"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6\") " Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.544079 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities" (OuterVolumeSpecName: "utilities") pod "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" (UID: "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.557571 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9" (OuterVolumeSpecName: "kube-api-access-jbjp9") pod "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" (UID: "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6"). InnerVolumeSpecName "kube-api-access-jbjp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.646096 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbjp9\" (UniqueName: \"kubernetes.io/projected/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-kube-api-access-jbjp9\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.646447 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.672384 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" (UID: "85c4cc42-b3f2-4722-aede-2dbe9b4df3a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.698541 4769 generic.go:334] "Generic (PLEG): container finished" podID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerID="cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e" exitCode=0 Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.698577 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerDied","Data":"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e"} Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.698602 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kk4nt" event={"ID":"85c4cc42-b3f2-4722-aede-2dbe9b4df3a6","Type":"ContainerDied","Data":"b4a081ab8a292b2a6aac6d4f7fba8547a502a842b9cac7075960474d6dc1fc2d"} Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.698617 4769 scope.go:117] "RemoveContainer" containerID="cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.698736 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kk4nt" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.738149 4769 scope.go:117] "RemoveContainer" containerID="44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.739056 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.747240 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kk4nt"] Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.747803 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.757015 4769 scope.go:117] "RemoveContainer" containerID="b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.781177 4769 scope.go:117] "RemoveContainer" containerID="cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e" Oct 06 08:29:01 crc kubenswrapper[4769]: E1006 08:29:01.782206 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e\": container with ID starting with cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e not found: ID does not exist" containerID="cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.782265 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e"} err="failed to get container status 
\"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e\": rpc error: code = NotFound desc = could not find container \"cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e\": container with ID starting with cd1e9fd24e39e67162495a24e48d748f9faccedb308807e2550d7da8b288b45e not found: ID does not exist" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.782299 4769 scope.go:117] "RemoveContainer" containerID="44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b" Oct 06 08:29:01 crc kubenswrapper[4769]: E1006 08:29:01.782787 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b\": container with ID starting with 44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b not found: ID does not exist" containerID="44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.782818 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b"} err="failed to get container status \"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b\": rpc error: code = NotFound desc = could not find container \"44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b\": container with ID starting with 44caf508c25fb7a28eefb844b39bf9870c023e914acab98aa6c1ff310cfa9f1b not found: ID does not exist" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.782836 4769 scope.go:117] "RemoveContainer" containerID="b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f" Oct 06 08:29:01 crc kubenswrapper[4769]: E1006 08:29:01.783380 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f\": container with ID starting with b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f not found: ID does not exist" containerID="b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f" Oct 06 08:29:01 crc kubenswrapper[4769]: I1006 08:29:01.783408 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f"} err="failed to get container status \"b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f\": rpc error: code = NotFound desc = could not find container \"b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f\": container with ID starting with b8dcdabee4d1b8171a0f69a1d208222a1f151bfc7436ac7f2a83148b4ae2c65f not found: ID does not exist" Oct 06 08:29:02 crc kubenswrapper[4769]: I1006 08:29:02.031219 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:02 crc kubenswrapper[4769]: W1006 08:29:02.047597 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e294b71_a9ad_4c32_b4cb_4eb73c886467.slice/crio-d39b86421d0d4501185c14d77a5cf2e083aeee3830e68fdfb1349771173c020b WatchSource:0}: Error finding container d39b86421d0d4501185c14d77a5cf2e083aeee3830e68fdfb1349771173c020b: Status 404 returned error can't find the container with id d39b86421d0d4501185c14d77a5cf2e083aeee3830e68fdfb1349771173c020b Oct 06 08:29:02 crc kubenswrapper[4769]: I1006 08:29:02.180007 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" path="/var/lib/kubelet/pods/85c4cc42-b3f2-4722-aede-2dbe9b4df3a6/volumes" Oct 06 08:29:02 crc kubenswrapper[4769]: I1006 08:29:02.710096 4769 generic.go:334] "Generic (PLEG): container finished" podID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" 
containerID="b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd" exitCode=0 Oct 06 08:29:02 crc kubenswrapper[4769]: I1006 08:29:02.710200 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerDied","Data":"b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd"} Oct 06 08:29:02 crc kubenswrapper[4769]: I1006 08:29:02.710254 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerStarted","Data":"d39b86421d0d4501185c14d77a5cf2e083aeee3830e68fdfb1349771173c020b"} Oct 06 08:29:07 crc kubenswrapper[4769]: I1006 08:29:07.166296 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:29:07 crc kubenswrapper[4769]: E1006 08:29:07.167064 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:29:19 crc kubenswrapper[4769]: I1006 08:29:19.166612 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:29:19 crc kubenswrapper[4769]: E1006 08:29:19.167384 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:29:24 crc kubenswrapper[4769]: I1006 08:29:24.893663 4769 generic.go:334] "Generic (PLEG): container finished" podID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerID="b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331" exitCode=0 Oct 06 08:29:24 crc kubenswrapper[4769]: I1006 08:29:24.893751 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerDied","Data":"b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331"} Oct 06 08:29:25 crc kubenswrapper[4769]: I1006 08:29:25.904309 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerStarted","Data":"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555"} Oct 06 08:29:25 crc kubenswrapper[4769]: I1006 08:29:25.921892 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vkkh" podStartSLOduration=2.234198504 podStartE2EDuration="24.921874201s" podCreationTimestamp="2025-10-06 08:29:01 +0000 UTC" firstStartedPulling="2025-10-06 08:29:02.712249384 +0000 UTC m=+4339.236530531" lastFinishedPulling="2025-10-06 08:29:25.399925081 +0000 UTC m=+4361.924206228" observedRunningTime="2025-10-06 08:29:25.919476935 +0000 UTC m=+4362.443758082" watchObservedRunningTime="2025-10-06 08:29:25.921874201 +0000 UTC m=+4362.446155348" Oct 06 08:29:31 crc kubenswrapper[4769]: I1006 08:29:31.166047 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:29:31 crc kubenswrapper[4769]: E1006 08:29:31.167033 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:29:31 crc kubenswrapper[4769]: I1006 08:29:31.530916 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:31 crc kubenswrapper[4769]: I1006 08:29:31.530970 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:31 crc kubenswrapper[4769]: I1006 08:29:31.583404 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:32 crc kubenswrapper[4769]: I1006 08:29:32.000766 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:32 crc kubenswrapper[4769]: I1006 08:29:32.403825 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:33 crc kubenswrapper[4769]: I1006 08:29:33.969701 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vkkh" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="registry-server" containerID="cri-o://495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555" gracePeriod=2 Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.494963 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.677042 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content\") pod \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.677174 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbg48\" (UniqueName: \"kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48\") pod \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.677325 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities\") pod \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\" (UID: \"4e294b71-a9ad-4c32-b4cb-4eb73c886467\") " Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.678739 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities" (OuterVolumeSpecName: "utilities") pod "4e294b71-a9ad-4c32-b4cb-4eb73c886467" (UID: "4e294b71-a9ad-4c32-b4cb-4eb73c886467"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.691462 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e294b71-a9ad-4c32-b4cb-4eb73c886467" (UID: "4e294b71-a9ad-4c32-b4cb-4eb73c886467"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.693202 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48" (OuterVolumeSpecName: "kube-api-access-qbg48") pod "4e294b71-a9ad-4c32-b4cb-4eb73c886467" (UID: "4e294b71-a9ad-4c32-b4cb-4eb73c886467"). InnerVolumeSpecName "kube-api-access-qbg48". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.779758 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbg48\" (UniqueName: \"kubernetes.io/projected/4e294b71-a9ad-4c32-b4cb-4eb73c886467-kube-api-access-qbg48\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.779795 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.779808 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e294b71-a9ad-4c32-b4cb-4eb73c886467-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.980094 4769 generic.go:334] "Generic (PLEG): container finished" podID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerID="495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555" exitCode=0 Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.980141 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerDied","Data":"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555"} Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.980150 4769 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vkkh" Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.980171 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vkkh" event={"ID":"4e294b71-a9ad-4c32-b4cb-4eb73c886467","Type":"ContainerDied","Data":"d39b86421d0d4501185c14d77a5cf2e083aeee3830e68fdfb1349771173c020b"} Oct 06 08:29:34 crc kubenswrapper[4769]: I1006 08:29:34.980191 4769 scope.go:117] "RemoveContainer" containerID="495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.002093 4769 scope.go:117] "RemoveContainer" containerID="b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.010662 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.019283 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vkkh"] Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.048977 4769 scope.go:117] "RemoveContainer" containerID="b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.070114 4769 scope.go:117] "RemoveContainer" containerID="495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555" Oct 06 08:29:35 crc kubenswrapper[4769]: E1006 08:29:35.070582 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555\": container with ID starting with 495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555 not found: ID does not exist" containerID="495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 
08:29:35.070614 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555"} err="failed to get container status \"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555\": rpc error: code = NotFound desc = could not find container \"495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555\": container with ID starting with 495a6c4059522740e1e4947412a03dc203d4bb47201486091fa6df5c135f0555 not found: ID does not exist" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.070633 4769 scope.go:117] "RemoveContainer" containerID="b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331" Oct 06 08:29:35 crc kubenswrapper[4769]: E1006 08:29:35.070981 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331\": container with ID starting with b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331 not found: ID does not exist" containerID="b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.071005 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331"} err="failed to get container status \"b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331\": rpc error: code = NotFound desc = could not find container \"b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331\": container with ID starting with b9801437a0aae99b0f5d28d25699c0062b03a7d723cf8c16a1fb6f387d410331 not found: ID does not exist" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.071022 4769 scope.go:117] "RemoveContainer" containerID="b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd" Oct 06 08:29:35 crc 
kubenswrapper[4769]: E1006 08:29:35.071402 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd\": container with ID starting with b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd not found: ID does not exist" containerID="b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd" Oct 06 08:29:35 crc kubenswrapper[4769]: I1006 08:29:35.071448 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd"} err="failed to get container status \"b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd\": rpc error: code = NotFound desc = could not find container \"b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd\": container with ID starting with b373055401016348581615272a8c057f340b2d5ec7356e659765ec095ce44efd not found: ID does not exist" Oct 06 08:29:36 crc kubenswrapper[4769]: I1006 08:29:36.176693 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" path="/var/lib/kubelet/pods/4e294b71-a9ad-4c32-b4cb-4eb73c886467/volumes" Oct 06 08:29:45 crc kubenswrapper[4769]: I1006 08:29:45.166472 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:29:45 crc kubenswrapper[4769]: E1006 08:29:45.167246 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:29:58 crc 
kubenswrapper[4769]: I1006 08:29:58.166817 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609" Oct 06 08:29:58 crc kubenswrapper[4769]: E1006 08:29:58.168776 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.191211 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92"] Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192078 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="extract-utilities" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192098 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="extract-utilities" Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192118 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="extract-content" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192128 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="extract-content" Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192144 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="extract-utilities" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192152 4769 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="extract-utilities" Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192168 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192175 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192199 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="extract-content" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192205 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="extract-content" Oct 06 08:30:00 crc kubenswrapper[4769]: E1006 08:30:00.192233 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192239 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192481 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c4cc42-b3f2-4722-aede-2dbe9b4df3a6" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.192501 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e294b71-a9ad-4c32-b4cb-4eb73c886467" containerName="registry-server" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.193093 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92"] Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.193275 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.197502 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.205166 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.358097 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96mk8\" (UniqueName: \"kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.358226 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.358528 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.460840 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-96mk8\" (UniqueName: \"kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.460968 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.461170 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.462358 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.473897 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 
08:30:00.477943 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96mk8\" (UniqueName: \"kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8\") pod \"collect-profiles-29328990-9gb92\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:00 crc kubenswrapper[4769]: I1006 08:30:00.532013 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:01 crc kubenswrapper[4769]: I1006 08:30:01.033411 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92"] Oct 06 08:30:01 crc kubenswrapper[4769]: I1006 08:30:01.217334 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" event={"ID":"9ac131b9-5abb-4480-817b-fc1a4befc8bc","Type":"ContainerStarted","Data":"c8c9c1fa560e231fa31bd030a935daaf7d0ed778f4c8b68382f6f940ecd7afb2"} Oct 06 08:30:02 crc kubenswrapper[4769]: I1006 08:30:02.229001 4769 generic.go:334] "Generic (PLEG): container finished" podID="9ac131b9-5abb-4480-817b-fc1a4befc8bc" containerID="c3dea704864805c4b542112a7e545dcfdf496deb8cb625892b8d216278ef97c5" exitCode=0 Oct 06 08:30:02 crc kubenswrapper[4769]: I1006 08:30:02.229122 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" event={"ID":"9ac131b9-5abb-4480-817b-fc1a4befc8bc","Type":"ContainerDied","Data":"c3dea704864805c4b542112a7e545dcfdf496deb8cb625892b8d216278ef97c5"} Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.577350 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.735162 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume\") pod \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.735291 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96mk8\" (UniqueName: \"kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8\") pod \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.735512 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume\") pod \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\" (UID: \"9ac131b9-5abb-4480-817b-fc1a4befc8bc\") " Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.744124 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ac131b9-5abb-4480-817b-fc1a4befc8bc" (UID: "9ac131b9-5abb-4480-817b-fc1a4befc8bc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.750522 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ac131b9-5abb-4480-817b-fc1a4befc8bc" (UID: "9ac131b9-5abb-4480-817b-fc1a4befc8bc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.756176 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8" (OuterVolumeSpecName: "kube-api-access-96mk8") pod "9ac131b9-5abb-4480-817b-fc1a4befc8bc" (UID: "9ac131b9-5abb-4480-817b-fc1a4befc8bc"). InnerVolumeSpecName "kube-api-access-96mk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.838318 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ac131b9-5abb-4480-817b-fc1a4befc8bc-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.838356 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ac131b9-5abb-4480-817b-fc1a4befc8bc-config-volume\") on node \"crc\" DevicePath \"\""
Oct 06 08:30:03 crc kubenswrapper[4769]: I1006 08:30:03.838366 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96mk8\" (UniqueName: \"kubernetes.io/projected/9ac131b9-5abb-4480-817b-fc1a4befc8bc-kube-api-access-96mk8\") on node \"crc\" DevicePath \"\""
Oct 06 08:30:04 crc kubenswrapper[4769]: I1006 08:30:04.246451 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92" event={"ID":"9ac131b9-5abb-4480-817b-fc1a4befc8bc","Type":"ContainerDied","Data":"c8c9c1fa560e231fa31bd030a935daaf7d0ed778f4c8b68382f6f940ecd7afb2"}
Oct 06 08:30:04 crc kubenswrapper[4769]: I1006 08:30:04.246804 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c9c1fa560e231fa31bd030a935daaf7d0ed778f4c8b68382f6f940ecd7afb2"
Oct 06 08:30:04 crc kubenswrapper[4769]: I1006 08:30:04.246484 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29328990-9gb92"
Oct 06 08:30:04 crc kubenswrapper[4769]: I1006 08:30:04.647466 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5"]
Oct 06 08:30:04 crc kubenswrapper[4769]: I1006 08:30:04.654637 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328945-v5xt5"]
Oct 06 08:30:06 crc kubenswrapper[4769]: I1006 08:30:06.178222 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a49fa2d-6794-4f68-b6d2-a426cfc5724c" path="/var/lib/kubelet/pods/3a49fa2d-6794-4f68-b6d2-a426cfc5724c/volumes"
Oct 06 08:30:10 crc kubenswrapper[4769]: I1006 08:30:10.166006 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:30:10 crc kubenswrapper[4769]: E1006 08:30:10.166815 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:30:23 crc kubenswrapper[4769]: I1006 08:30:23.167574 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:30:23 crc kubenswrapper[4769]: E1006 08:30:23.168634 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:30:35 crc kubenswrapper[4769]: I1006 08:30:35.166828 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:30:35 crc kubenswrapper[4769]: E1006 08:30:35.167507 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:30:49 crc kubenswrapper[4769]: I1006 08:30:49.170767 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:30:49 crc kubenswrapper[4769]: E1006 08:30:49.171830 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:30:57 crc kubenswrapper[4769]: I1006 08:30:57.949118 4769 scope.go:117] "RemoveContainer" containerID="58ef2d37c3345429e167383f739bd501f6720fcbff31c460a45aacf5ef61f319"
Oct 06 08:31:02 crc kubenswrapper[4769]: I1006 08:31:02.167215 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:31:02 crc kubenswrapper[4769]: E1006 08:31:02.168042 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:31:16 crc kubenswrapper[4769]: I1006 08:31:16.165744 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:31:16 crc kubenswrapper[4769]: E1006 08:31:16.167841 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:31:28 crc kubenswrapper[4769]: I1006 08:31:28.166554 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:31:28 crc kubenswrapper[4769]: E1006 08:31:28.167413 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:31:42 crc kubenswrapper[4769]: I1006 08:31:42.166534 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:31:42 crc kubenswrapper[4769]: E1006 08:31:42.167730 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:31:57 crc kubenswrapper[4769]: I1006 08:31:57.166914 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:31:57 crc kubenswrapper[4769]: E1006 08:31:57.167597 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:32:08 crc kubenswrapper[4769]: I1006 08:32:08.166039 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:32:08 crc kubenswrapper[4769]: E1006 08:32:08.167325 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:32:22 crc kubenswrapper[4769]: I1006 08:32:22.166713 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:32:22 crc kubenswrapper[4769]: E1006 08:32:22.167604 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:32:33 crc kubenswrapper[4769]: I1006 08:32:33.165729 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:32:33 crc kubenswrapper[4769]: E1006 08:32:33.166495 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:32:44 crc kubenswrapper[4769]: I1006 08:32:44.172333 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:32:44 crc kubenswrapper[4769]: E1006 08:32:44.173165 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:32:57 crc kubenswrapper[4769]: I1006 08:32:57.166336 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:32:57 crc kubenswrapper[4769]: I1006 08:32:57.798480 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a"}
Oct 06 08:35:06 crc kubenswrapper[4769]: I1006 08:35:06.998148 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:07 crc kubenswrapper[4769]: E1006 08:35:06.999723 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac131b9-5abb-4480-817b-fc1a4befc8bc" containerName="collect-profiles"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:06.999748 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac131b9-5abb-4480-817b-fc1a4befc8bc" containerName="collect-profiles"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.000024 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac131b9-5abb-4480-817b-fc1a4befc8bc" containerName="collect-profiles"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.001839 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.011008 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.057532 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.057662 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwfqz\" (UniqueName: \"kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.057744 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.159840 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwfqz\" (UniqueName: \"kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.160242 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.160290 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.160704 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.160911 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.460247 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwfqz\" (UniqueName: \"kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz\") pod \"community-operators-x8r8z\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") " pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:07 crc kubenswrapper[4769]: I1006 08:35:07.630408 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:08 crc kubenswrapper[4769]: I1006 08:35:08.078237 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:08 crc kubenswrapper[4769]: I1006 08:35:08.877993 4769 generic.go:334] "Generic (PLEG): container finished" podID="935ed839-d337-4abf-9ad0-92fd352d849d" containerID="97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406" exitCode=0
Oct 06 08:35:08 crc kubenswrapper[4769]: I1006 08:35:08.878303 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerDied","Data":"97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406"}
Oct 06 08:35:08 crc kubenswrapper[4769]: I1006 08:35:08.878329 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerStarted","Data":"851fdd255db064858668fa7ba664de1137ceccf1815decf18bec068be4477af6"}
Oct 06 08:35:08 crc kubenswrapper[4769]: I1006 08:35:08.880267 4769 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 06 08:35:10 crc kubenswrapper[4769]: I1006 08:35:10.908313 4769 generic.go:334] "Generic (PLEG): container finished" podID="935ed839-d337-4abf-9ad0-92fd352d849d" containerID="5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b" exitCode=0
Oct 06 08:35:10 crc kubenswrapper[4769]: I1006 08:35:10.908623 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerDied","Data":"5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b"}
Oct 06 08:35:11 crc kubenswrapper[4769]: I1006 08:35:11.917576 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerStarted","Data":"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"}
Oct 06 08:35:11 crc kubenswrapper[4769]: I1006 08:35:11.935944 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8r8z" podStartSLOduration=3.462105701 podStartE2EDuration="5.93592691s" podCreationTimestamp="2025-10-06 08:35:06 +0000 UTC" firstStartedPulling="2025-10-06 08:35:08.880068228 +0000 UTC m=+4705.404349365" lastFinishedPulling="2025-10-06 08:35:11.353889427 +0000 UTC m=+4707.878170574" observedRunningTime="2025-10-06 08:35:11.935047066 +0000 UTC m=+4708.459328213" watchObservedRunningTime="2025-10-06 08:35:11.93592691 +0000 UTC m=+4708.460208057"
Oct 06 08:35:17 crc kubenswrapper[4769]: I1006 08:35:17.630819 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:17 crc kubenswrapper[4769]: I1006 08:35:17.631934 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:17 crc kubenswrapper[4769]: I1006 08:35:17.688768 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:18 crc kubenswrapper[4769]: I1006 08:35:18.014605 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:18 crc kubenswrapper[4769]: I1006 08:35:18.069529 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:19 crc kubenswrapper[4769]: I1006 08:35:19.984842 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8r8z" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="registry-server" containerID="cri-o://079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87" gracePeriod=2
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.460518 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.612755 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities\") pod \"935ed839-d337-4abf-9ad0-92fd352d849d\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") "
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.612880 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwfqz\" (UniqueName: \"kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz\") pod \"935ed839-d337-4abf-9ad0-92fd352d849d\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") "
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.613168 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content\") pod \"935ed839-d337-4abf-9ad0-92fd352d849d\" (UID: \"935ed839-d337-4abf-9ad0-92fd352d849d\") "
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.614194 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities" (OuterVolumeSpecName: "utilities") pod "935ed839-d337-4abf-9ad0-92fd352d849d" (UID: "935ed839-d337-4abf-9ad0-92fd352d849d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.616313 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-utilities\") on node \"crc\" DevicePath \"\""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.622119 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz" (OuterVolumeSpecName: "kube-api-access-kwfqz") pod "935ed839-d337-4abf-9ad0-92fd352d849d" (UID: "935ed839-d337-4abf-9ad0-92fd352d849d"). InnerVolumeSpecName "kube-api-access-kwfqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.666166 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "935ed839-d337-4abf-9ad0-92fd352d849d" (UID: "935ed839-d337-4abf-9ad0-92fd352d849d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.717734 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/935ed839-d337-4abf-9ad0-92fd352d849d-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.717793 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwfqz\" (UniqueName: \"kubernetes.io/projected/935ed839-d337-4abf-9ad0-92fd352d849d-kube-api-access-kwfqz\") on node \"crc\" DevicePath \"\""
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.995472 4769 generic.go:334] "Generic (PLEG): container finished" podID="935ed839-d337-4abf-9ad0-92fd352d849d" containerID="079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87" exitCode=0
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.995520 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerDied","Data":"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"}
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.995552 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8r8z" event={"ID":"935ed839-d337-4abf-9ad0-92fd352d849d","Type":"ContainerDied","Data":"851fdd255db064858668fa7ba664de1137ceccf1815decf18bec068be4477af6"}
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.995569 4769 scope.go:117] "RemoveContainer" containerID="079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"
Oct 06 08:35:20 crc kubenswrapper[4769]: I1006 08:35:20.995569 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8r8z"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.017711 4769 scope.go:117] "RemoveContainer" containerID="5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.027308 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.036052 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8r8z"]
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.052054 4769 scope.go:117] "RemoveContainer" containerID="97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.079952 4769 scope.go:117] "RemoveContainer" containerID="079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"
Oct 06 08:35:21 crc kubenswrapper[4769]: E1006 08:35:21.080482 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87\": container with ID starting with 079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87 not found: ID does not exist" containerID="079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.080525 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87"} err="failed to get container status \"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87\": rpc error: code = NotFound desc = could not find container \"079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87\": container with ID starting with 079e547c3e8467fb79054aec0324628e6a6379f1b59d005711a77ac32f396d87 not found: ID does not exist"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.080562 4769 scope.go:117] "RemoveContainer" containerID="5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b"
Oct 06 08:35:21 crc kubenswrapper[4769]: E1006 08:35:21.080880 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b\": container with ID starting with 5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b not found: ID does not exist" containerID="5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.080913 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b"} err="failed to get container status \"5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b\": rpc error: code = NotFound desc = could not find container \"5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b\": container with ID starting with 5f0ff417aa40a98f6e25f1412276c6e2995e128fbe07e537001f59216309829b not found: ID does not exist"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.080927 4769 scope.go:117] "RemoveContainer" containerID="97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406"
Oct 06 08:35:21 crc kubenswrapper[4769]: E1006 08:35:21.081135 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406\": container with ID starting with 97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406 not found: ID does not exist" containerID="97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406"
Oct 06 08:35:21 crc kubenswrapper[4769]: I1006 08:35:21.081153 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406"} err="failed to get container status \"97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406\": rpc error: code = NotFound desc = could not find container \"97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406\": container with ID starting with 97471a46b793960af22db1a89581ca988d9f70728afa418fca948f3cce5d5406 not found: ID does not exist"
Oct 06 08:35:22 crc kubenswrapper[4769]: I1006 08:35:22.178509 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" path="/var/lib/kubelet/pods/935ed839-d337-4abf-9ad0-92fd352d849d/volumes"
Oct 06 08:35:22 crc kubenswrapper[4769]: I1006 08:35:22.245239 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 08:35:22 crc kubenswrapper[4769]: I1006 08:35:22.245310 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 08:35:52 crc kubenswrapper[4769]: I1006 08:35:52.245591 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 08:35:52 crc kubenswrapper[4769]: I1006 08:35:52.246713 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.246098 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.247233 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.247320 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr"
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.248803 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.248863 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a" gracePeriod=600
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.512725 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a" exitCode=0
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.513272 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a"}
Oct 06 08:36:22 crc kubenswrapper[4769]: I1006 08:36:22.513343 4769 scope.go:117] "RemoveContainer" containerID="7f9e6a04d676470f4046edb471509774720f425b72fdc2e6dc3bc269877c8609"
Oct 06 08:36:23 crc kubenswrapper[4769]: I1006 08:36:23.524271 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"}
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.729875 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"]
Oct 06 08:36:36 crc kubenswrapper[4769]: E1006 08:36:36.731768 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="extract-content"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.731783 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="extract-content"
Oct 06 08:36:36 crc kubenswrapper[4769]: E1006 08:36:36.731811 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="registry-server"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.731817 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="registry-server"
Oct 06 08:36:36 crc kubenswrapper[4769]: E1006 08:36:36.731830 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="extract-utilities"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.731836 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="extract-utilities"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.732015 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="935ed839-d337-4abf-9ad0-92fd352d849d" containerName="registry-server"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.744760 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"]
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.744885 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vt2ln"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.916022 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.916085 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzdwg\" (UniqueName: \"kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln"
Oct 06 08:36:36 crc kubenswrapper[4769]: I1006 08:36:36.916269 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln"
Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.017829 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln"
Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.017891 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzdwg\" (UniqueName: \"kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg\") pod \"redhat-operators-vt2ln\"
(UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.017947 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.018498 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.018508 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.046554 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzdwg\" (UniqueName: \"kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg\") pod \"redhat-operators-vt2ln\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.070925 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.595664 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"] Oct 06 08:36:37 crc kubenswrapper[4769]: I1006 08:36:37.633800 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerStarted","Data":"9670dcaf83cc26c6f15a01ae19215a9f2dab2dd13172adde78e4efa77b9f8a5c"} Oct 06 08:36:38 crc kubenswrapper[4769]: I1006 08:36:38.644205 4769 generic.go:334] "Generic (PLEG): container finished" podID="c069f6b3-fe41-4676-b4e8-56256837650a" containerID="a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e" exitCode=0 Oct 06 08:36:38 crc kubenswrapper[4769]: I1006 08:36:38.644328 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerDied","Data":"a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e"} Oct 06 08:36:41 crc kubenswrapper[4769]: I1006 08:36:41.668829 4769 generic.go:334] "Generic (PLEG): container finished" podID="c069f6b3-fe41-4676-b4e8-56256837650a" containerID="5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29" exitCode=0 Oct 06 08:36:41 crc kubenswrapper[4769]: I1006 08:36:41.668895 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerDied","Data":"5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29"} Oct 06 08:36:42 crc kubenswrapper[4769]: I1006 08:36:42.680875 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" 
event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerStarted","Data":"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41"} Oct 06 08:36:42 crc kubenswrapper[4769]: I1006 08:36:42.702847 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vt2ln" podStartSLOduration=3.119558149 podStartE2EDuration="6.702823606s" podCreationTimestamp="2025-10-06 08:36:36 +0000 UTC" firstStartedPulling="2025-10-06 08:36:38.646915846 +0000 UTC m=+4795.171197033" lastFinishedPulling="2025-10-06 08:36:42.230181343 +0000 UTC m=+4798.754462490" observedRunningTime="2025-10-06 08:36:42.696544123 +0000 UTC m=+4799.220825270" watchObservedRunningTime="2025-10-06 08:36:42.702823606 +0000 UTC m=+4799.227104753" Oct 06 08:36:47 crc kubenswrapper[4769]: I1006 08:36:47.071885 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:47 crc kubenswrapper[4769]: I1006 08:36:47.072489 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:47 crc kubenswrapper[4769]: I1006 08:36:47.132553 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:47 crc kubenswrapper[4769]: I1006 08:36:47.894546 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:47 crc kubenswrapper[4769]: I1006 08:36:47.934965 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"] Oct 06 08:36:49 crc kubenswrapper[4769]: I1006 08:36:49.739125 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vt2ln" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="registry-server" 
containerID="cri-o://bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41" gracePeriod=2 Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.175801 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.268229 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content\") pod \"c069f6b3-fe41-4676-b4e8-56256837650a\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.268279 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzdwg\" (UniqueName: \"kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg\") pod \"c069f6b3-fe41-4676-b4e8-56256837650a\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.268437 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities\") pod \"c069f6b3-fe41-4676-b4e8-56256837650a\" (UID: \"c069f6b3-fe41-4676-b4e8-56256837650a\") " Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.269201 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities" (OuterVolumeSpecName: "utilities") pod "c069f6b3-fe41-4676-b4e8-56256837650a" (UID: "c069f6b3-fe41-4676-b4e8-56256837650a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.273878 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg" (OuterVolumeSpecName: "kube-api-access-xzdwg") pod "c069f6b3-fe41-4676-b4e8-56256837650a" (UID: "c069f6b3-fe41-4676-b4e8-56256837650a"). InnerVolumeSpecName "kube-api-access-xzdwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.360841 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c069f6b3-fe41-4676-b4e8-56256837650a" (UID: "c069f6b3-fe41-4676-b4e8-56256837650a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.370599 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.370625 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzdwg\" (UniqueName: \"kubernetes.io/projected/c069f6b3-fe41-4676-b4e8-56256837650a-kube-api-access-xzdwg\") on node \"crc\" DevicePath \"\"" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.370635 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c069f6b3-fe41-4676-b4e8-56256837650a-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.752590 4769 generic.go:334] "Generic (PLEG): container finished" podID="c069f6b3-fe41-4676-b4e8-56256837650a" 
containerID="bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41" exitCode=0 Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.752659 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vt2ln" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.752688 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerDied","Data":"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41"} Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.753051 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vt2ln" event={"ID":"c069f6b3-fe41-4676-b4e8-56256837650a","Type":"ContainerDied","Data":"9670dcaf83cc26c6f15a01ae19215a9f2dab2dd13172adde78e4efa77b9f8a5c"} Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.753089 4769 scope.go:117] "RemoveContainer" containerID="bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.778689 4769 scope.go:117] "RemoveContainer" containerID="5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.783404 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"] Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.802295 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vt2ln"] Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.852599 4769 scope.go:117] "RemoveContainer" containerID="a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.881549 4769 scope.go:117] "RemoveContainer" containerID="bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41" Oct 06 08:36:50 crc 
kubenswrapper[4769]: E1006 08:36:50.883202 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41\": container with ID starting with bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41 not found: ID does not exist" containerID="bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.883266 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41"} err="failed to get container status \"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41\": rpc error: code = NotFound desc = could not find container \"bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41\": container with ID starting with bf61349d86f06d9af2e6cef041b65bd0d6490a044aa0124e79341f8986663d41 not found: ID does not exist" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.883302 4769 scope.go:117] "RemoveContainer" containerID="5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29" Oct 06 08:36:50 crc kubenswrapper[4769]: E1006 08:36:50.883631 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29\": container with ID starting with 5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29 not found: ID does not exist" containerID="5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.883660 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29"} err="failed to get container status 
\"5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29\": rpc error: code = NotFound desc = could not find container \"5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29\": container with ID starting with 5f6a9bad1eb365dd9b60be89a1cb6f6d2dc218da242e431acbf4ca86d4abfe29 not found: ID does not exist" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.883679 4769 scope.go:117] "RemoveContainer" containerID="a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e" Oct 06 08:36:50 crc kubenswrapper[4769]: E1006 08:36:50.883902 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e\": container with ID starting with a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e not found: ID does not exist" containerID="a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.883926 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e"} err="failed to get container status \"a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e\": rpc error: code = NotFound desc = could not find container \"a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e\": container with ID starting with a944f1510e093512d9ce652d8344d9c4bcf2a8da0ff5318d2e11fa326845cb5e not found: ID does not exist" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.973249 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fdzrg/must-gather-km72z"] Oct 06 08:36:50 crc kubenswrapper[4769]: E1006 08:36:50.973723 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="extract-utilities" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 
08:36:50.973741 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="extract-utilities" Oct 06 08:36:50 crc kubenswrapper[4769]: E1006 08:36:50.973758 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="registry-server" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.973766 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="registry-server" Oct 06 08:36:50 crc kubenswrapper[4769]: E1006 08:36:50.973777 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="extract-content" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.973785 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="extract-content" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.973967 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" containerName="registry-server" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.974942 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.985844 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fdzrg/must-gather-km72z"] Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.987781 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fdzrg"/"kube-root-ca.crt" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.987800 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fdzrg"/"openshift-service-ca.crt" Oct 06 08:36:50 crc kubenswrapper[4769]: I1006 08:36:50.991308 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-fdzrg"/"default-dockercfg-sb8cb" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.084461 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmd5w\" (UniqueName: \"kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.084589 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.185918 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmd5w\" (UniqueName: \"kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " 
pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.185992 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.186332 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.204121 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmd5w\" (UniqueName: \"kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w\") pod \"must-gather-km72z\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:51 crc kubenswrapper[4769]: I1006 08:36:51.296384 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:36:52 crc kubenswrapper[4769]: I1006 08:36:52.180982 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c069f6b3-fe41-4676-b4e8-56256837650a" path="/var/lib/kubelet/pods/c069f6b3-fe41-4676-b4e8-56256837650a/volumes" Oct 06 08:36:52 crc kubenswrapper[4769]: I1006 08:36:52.425672 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fdzrg/must-gather-km72z"] Oct 06 08:36:52 crc kubenswrapper[4769]: I1006 08:36:52.780119 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/must-gather-km72z" event={"ID":"bcda4cb1-625c-403f-b766-b03eb41ee198","Type":"ContainerStarted","Data":"e87c5e1d8e6f6a987b267a8dd7fddbe017dc3f04854c73eda30efdcd629e74aa"} Oct 06 08:37:00 crc kubenswrapper[4769]: I1006 08:37:00.886461 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/must-gather-km72z" event={"ID":"bcda4cb1-625c-403f-b766-b03eb41ee198","Type":"ContainerStarted","Data":"aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0"} Oct 06 08:37:00 crc kubenswrapper[4769]: I1006 08:37:00.887797 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/must-gather-km72z" event={"ID":"bcda4cb1-625c-403f-b766-b03eb41ee198","Type":"ContainerStarted","Data":"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794"} Oct 06 08:37:04 crc kubenswrapper[4769]: E1006 08:37:04.106390 4769 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.5:54130->38.102.83.5:43105: read tcp 38.102.83.5:54130->38.102.83.5:43105: read: connection reset by peer Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.833625 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fdzrg/must-gather-km72z" podStartSLOduration=8.18736292 podStartE2EDuration="14.833604635s" 
podCreationTimestamp="2025-10-06 08:36:50 +0000 UTC" firstStartedPulling="2025-10-06 08:36:52.441858082 +0000 UTC m=+4808.966139229" lastFinishedPulling="2025-10-06 08:36:59.088099797 +0000 UTC m=+4815.612380944" observedRunningTime="2025-10-06 08:37:00.907743273 +0000 UTC m=+4817.432024420" watchObservedRunningTime="2025-10-06 08:37:04.833604635 +0000 UTC m=+4821.357885782" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.834287 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-txv4h"] Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.835436 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.892553 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2hz\" (UniqueName: \"kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.893003 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.994956 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2hz\" (UniqueName: \"kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.995130 4769 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:04 crc kubenswrapper[4769]: I1006 08:37:04.995285 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:05 crc kubenswrapper[4769]: I1006 08:37:05.026523 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2hz\" (UniqueName: \"kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz\") pod \"crc-debug-txv4h\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:05 crc kubenswrapper[4769]: I1006 08:37:05.157007 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:37:05 crc kubenswrapper[4769]: W1006 08:37:05.213931 4769 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2a4cda_b502_4a1d_b54f_2b5947c8202c.slice/crio-dc68606d15e8800a756b9c6c80d850f94635c75579c9d7d3d5633f6d1e89cea5 WatchSource:0}: Error finding container dc68606d15e8800a756b9c6c80d850f94635c75579c9d7d3d5633f6d1e89cea5: Status 404 returned error can't find the container with id dc68606d15e8800a756b9c6c80d850f94635c75579c9d7d3d5633f6d1e89cea5 Oct 06 08:37:05 crc kubenswrapper[4769]: I1006 08:37:05.929605 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" event={"ID":"fc2a4cda-b502-4a1d-b54f-2b5947c8202c","Type":"ContainerStarted","Data":"dc68606d15e8800a756b9c6c80d850f94635c75579c9d7d3d5633f6d1e89cea5"} Oct 06 08:37:18 crc kubenswrapper[4769]: I1006 08:37:18.035950 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" event={"ID":"fc2a4cda-b502-4a1d-b54f-2b5947c8202c","Type":"ContainerStarted","Data":"48d1296a831911359c4a5a8e0c3c5260a7532c50d3b7d6fd13d165e0672bd407"} Oct 06 08:37:18 crc kubenswrapper[4769]: I1006 08:37:18.051204 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" podStartSLOduration=1.939055762 podStartE2EDuration="14.051184238s" podCreationTimestamp="2025-10-06 08:37:04 +0000 UTC" firstStartedPulling="2025-10-06 08:37:05.216712477 +0000 UTC m=+4821.740993624" lastFinishedPulling="2025-10-06 08:37:17.328840953 +0000 UTC m=+4833.853122100" observedRunningTime="2025-10-06 08:37:18.048050831 +0000 UTC m=+4834.572331978" watchObservedRunningTime="2025-10-06 08:37:18.051184238 +0000 UTC m=+4834.575465385" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.450660 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-58cc998976-964nm_52bdd792-befb-4b73-b231-9e8301f3806a/barbican-api/0.log" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.467320 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58cc998976-964nm_52bdd792-befb-4b73-b231-9e8301f3806a/barbican-api-log/0.log" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.645073 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77b6b965c4-r6bn4_8a39b78c-9254-4275-b3bd-3fc0f137272f/barbican-keystone-listener/0.log" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.714882 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-77b6b965c4-r6bn4_8a39b78c-9254-4275-b3bd-3fc0f137272f/barbican-keystone-listener-log/0.log" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.918755 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c48f574cf-j5sgk_cbc1704d-575e-4d1b-a09b-faac26f1faf2/barbican-worker/0.log" Oct 06 08:38:07 crc kubenswrapper[4769]: I1006 08:38:07.922391 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5c48f574cf-j5sgk_cbc1704d-575e-4d1b-a09b-faac26f1faf2/barbican-worker-log/0.log" Oct 06 08:38:08 crc kubenswrapper[4769]: I1006 08:38:08.098919 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-94xnr_a13d7d14-1273-4616-9260-fb702b0948f2/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:08 crc kubenswrapper[4769]: I1006 08:38:08.357045 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-knj8q_a3ccce16-cd72-46c0-ab4f-546d83bf38db/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:08 crc kubenswrapper[4769]: I1006 08:38:08.408360 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kvn9d_d3e88895-8ab2-4b5c-a4ea-60cdeb335a77/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.059864 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mdnlj_99cfdb3d-0fd9-47a4-b6af-70f78b733696/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.300097 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-mtpb7_761139da-805f-4c7e-a9af-6dfd529df0d5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.392679 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-n44w4_4df5bcab-0094-4bf9-bb2d-8e1376e55260/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.595379 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-rkvcc_67fef4ae-c185-4c7d-abba-2410461c1078/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.826882 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5b3be702-b284-41a7-8e76-a09139eed2b4/proxy-httpd/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.828148 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5b3be702-b284-41a7-8e76-a09139eed2b4/ceilometer-notification-agent/0.log" Oct 06 08:38:09 crc kubenswrapper[4769]: I1006 08:38:09.853824 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5b3be702-b284-41a7-8e76-a09139eed2b4/ceilometer-central-agent/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.046701 
4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5b3be702-b284-41a7-8e76-a09139eed2b4/sg-core/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.238867 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f07f5e2f-83e9-4cfe-a9b5-d372bfecc869/cinder-api/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.315210 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f07f5e2f-83e9-4cfe-a9b5-d372bfecc869/cinder-api-log/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.469891 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_602f4ddb-1ad0-440b-b402-0179dd5604b3/cinder-scheduler/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.523725 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_602f4ddb-1ad0-440b-b402-0179dd5604b3/probe/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.706802 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-698c57b6fc-kpzq9_17a31b12-440b-4bad-87f5-176edddf3ba4/init/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.842130 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-698c57b6fc-kpzq9_17a31b12-440b-4bad-87f5-176edddf3ba4/init/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.856920 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-698c57b6fc-kpzq9_17a31b12-440b-4bad-87f5-176edddf3ba4/dnsmasq-dns/0.log" Oct 06 08:38:10 crc kubenswrapper[4769]: I1006 08:38:10.955113 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_5f9369da-2ad7-4cd1-a161-41ece66008e0/glance-httpd/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.059119 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_5f9369da-2ad7-4cd1-a161-41ece66008e0/glance-log/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.167462 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3f6baf3d-2817-453a-b212-4b8860056e9f/glance-log/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.188069 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3f6baf3d-2817-453a-b212-4b8860056e9f/glance-httpd/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.411126 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-655c85cd69-rkn7x_72492080-7681-4cf3-b84b-5a4d33f529df/keystone-api/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.479614 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29328961-lhpgc_45fb0d84-9e40-482d-82dc-f2040969398d/keystone-cron/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.604779 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d8005a18-8df9-47d9-a446-9e2b18d04409/kube-state-metrics/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.880484 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b48758c79-6xdwf_7bf270e0-baab-4f7d-ae98-8e3776b1518d/neutron-api/0.log" Oct 06 08:38:11 crc kubenswrapper[4769]: I1006 08:38:11.962327 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5b48758c79-6xdwf_7bf270e0-baab-4f7d-ae98-8e3776b1518d/neutron-httpd/0.log" Oct 06 08:38:12 crc kubenswrapper[4769]: I1006 08:38:12.443333 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7a3f83fc-48c6-4323-83be-b39bc9529799/nova-api-log/0.log" Oct 06 08:38:12 crc kubenswrapper[4769]: I1006 08:38:12.853480 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_7a3f83fc-48c6-4323-83be-b39bc9529799/nova-api-api/0.log" Oct 06 08:38:12 crc kubenswrapper[4769]: I1006 08:38:12.857587 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_02b2e9f3-0bde-4259-bb5d-1dd15f3e1e62/nova-cell0-conductor-conductor/0.log" Oct 06 08:38:13 crc kubenswrapper[4769]: I1006 08:38:13.058394 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_80670f73-8131-40f1-8f91-a9291f87f615/nova-cell1-conductor-conductor/0.log" Oct 06 08:38:13 crc kubenswrapper[4769]: I1006 08:38:13.242635 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5554fc29-0173-4e76-aa22-355c4f3725d2/nova-cell1-novncproxy-novncproxy/0.log" Oct 06 08:38:13 crc kubenswrapper[4769]: I1006 08:38:13.402790 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0f82871-7da7-45c1-abda-273fa82504af/nova-metadata-log/0.log" Oct 06 08:38:13 crc kubenswrapper[4769]: I1006 08:38:13.865554 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_79aebe36-4b56-4b00-a3ff-0dd2965702c8/nova-scheduler-scheduler/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.105590 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71170300-e629-4a76-8960-29bdc328edf5/mysql-bootstrap/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.232886 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71170300-e629-4a76-8960-29bdc328edf5/mysql-bootstrap/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.360971 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_71170300-e629-4a76-8960-29bdc328edf5/galera/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.614257 4769 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_openstack-galera-0_687dbbb5-7929-4674-9ac4-77ec7ff8e424/mysql-bootstrap/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.768522 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_687dbbb5-7929-4674-9ac4-77ec7ff8e424/mysql-bootstrap/0.log" Oct 06 08:38:14 crc kubenswrapper[4769]: I1006 08:38:14.881289 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_687dbbb5-7929-4674-9ac4-77ec7ff8e424/galera/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.080762 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6f346861-ee62-493c-82bd-2ea7fa7347e6/openstackclient/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.307055 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vh747_780fca18-2e85-4252-9b00-326e46d26eae/openstack-network-exporter/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.586027 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4qmzc_530bb6cc-e4fa-42bf-88aa-38020d3b5513/ovsdb-server-init/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.754571 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b0f82871-7da7-45c1-abda-273fa82504af/nova-metadata-metadata/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.813600 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4qmzc_530bb6cc-e4fa-42bf-88aa-38020d3b5513/ovsdb-server-init/0.log" Oct 06 08:38:15 crc kubenswrapper[4769]: I1006 08:38:15.881071 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-4qmzc_530bb6cc-e4fa-42bf-88aa-38020d3b5513/ovs-vswitchd/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.017455 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-4qmzc_530bb6cc-e4fa-42bf-88aa-38020d3b5513/ovsdb-server/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.475628 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rdj69_7666a29e-0c83-4099-ae6d-1fc333d3c630/ovn-controller/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.494377 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e249936-c0cd-44da-9ad9-c11c17090006/openstack-network-exporter/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.724877 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e249936-c0cd-44da-9ad9-c11c17090006/ovn-northd/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.782844 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1687c91c-da3f-4399-a0a8-b9d1769ddd30/openstack-network-exporter/0.log" Oct 06 08:38:16 crc kubenswrapper[4769]: I1006 08:38:16.974411 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_1687c91c-da3f-4399-a0a8-b9d1769ddd30/ovsdbserver-nb/0.log" Oct 06 08:38:17 crc kubenswrapper[4769]: I1006 08:38:17.020025 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_91209a46-4315-4dd2-91a5-7658b454d9ec/openstack-network-exporter/0.log" Oct 06 08:38:17 crc kubenswrapper[4769]: I1006 08:38:17.207858 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_91209a46-4315-4dd2-91a5-7658b454d9ec/ovsdbserver-sb/0.log" Oct 06 08:38:17 crc kubenswrapper[4769]: I1006 08:38:17.347806 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b886c44b-zxp8t_088c3631-1983-4fb6-8e21-0c5b6f7b11c2/placement-api/0.log" Oct 06 08:38:17 crc kubenswrapper[4769]: I1006 08:38:17.494517 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-9b886c44b-zxp8t_088c3631-1983-4fb6-8e21-0c5b6f7b11c2/placement-log/0.log" Oct 06 08:38:17 crc kubenswrapper[4769]: I1006 08:38:17.621582 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6/setup-container/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.104427 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6/setup-container/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.221944 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8b20fe41-af9c-40f2-aeae-ee1cd6c56bf6/rabbitmq/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.367690 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e5db2de8-5580-43f3-aa10-3a1cc7806fba/setup-container/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.566508 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e5db2de8-5580-43f3-aa10-3a1cc7806fba/rabbitmq/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.572602 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_e5db2de8-5580-43f3-aa10-3a1cc7806fba/setup-container/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.806847 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-gm4q6_c44a1d31-d466-4c44-b8ea-088f2011e9b3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:18 crc kubenswrapper[4769]: I1006 08:38:18.920624 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-grtfw_1be57ab5-e167-4a82-a668-0ac08f6d9a18/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 06 08:38:19 crc 
kubenswrapper[4769]: I1006 08:38:19.159373 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b475fd569-c56qk_ef915e60-c2fd-4336-84f3-62b2cfa713a9/proxy-server/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.336760 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-5dtjw_565d366a-8723-4b82-8b01-cd4d05b66e18/swift-ring-rebalance/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.401320 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7b475fd569-c56qk_ef915e60-c2fd-4336-84f3-62b2cfa713a9/proxy-httpd/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.610311 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/account-auditor/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.648338 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/account-reaper/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.796970 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/account-server/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.836947 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/account-replicator/0.log" Oct 06 08:38:19 crc kubenswrapper[4769]: I1006 08:38:19.874502 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/container-auditor/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.019188 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/container-server/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.055107 
4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/container-replicator/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.082323 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/container-updater/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.243972 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/object-auditor/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.280707 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/object-expirer/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.320047 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/object-replicator/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.460237 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/object-server/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.492358 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/object-updater/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.562335 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/rsync/0.log" Oct 06 08:38:20 crc kubenswrapper[4769]: I1006 08:38:20.734165 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_91c9fe86-3c6f-485c-94a9-5adc4a88d14f/swift-recon-cron/0.log" Oct 06 08:38:22 crc kubenswrapper[4769]: I1006 08:38:22.245268 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:38:22 crc kubenswrapper[4769]: I1006 08:38:22.245593 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:38:27 crc kubenswrapper[4769]: I1006 08:38:27.305685 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c804d714-a4d7-487d-b52a-bb2e1a47f36b/memcached/0.log" Oct 06 08:38:50 crc kubenswrapper[4769]: I1006 08:38:50.874406 4769 generic.go:334] "Generic (PLEG): container finished" podID="fc2a4cda-b502-4a1d-b54f-2b5947c8202c" containerID="48d1296a831911359c4a5a8e0c3c5260a7532c50d3b7d6fd13d165e0672bd407" exitCode=0 Oct 06 08:38:50 crc kubenswrapper[4769]: I1006 08:38:50.874615 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" event={"ID":"fc2a4cda-b502-4a1d-b54f-2b5947c8202c","Type":"ContainerDied","Data":"48d1296a831911359c4a5a8e0c3c5260a7532c50d3b7d6fd13d165e0672bd407"} Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.006302 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.035338 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-txv4h"] Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.041674 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-txv4h"] Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.202529 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host\") pod \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.202665 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host" (OuterVolumeSpecName: "host") pod "fc2a4cda-b502-4a1d-b54f-2b5947c8202c" (UID: "fc2a4cda-b502-4a1d-b54f-2b5947c8202c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.202985 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb2hz\" (UniqueName: \"kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz\") pod \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\" (UID: \"fc2a4cda-b502-4a1d-b54f-2b5947c8202c\") " Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.203378 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-host\") on node \"crc\" DevicePath \"\"" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.208470 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz" (OuterVolumeSpecName: "kube-api-access-tb2hz") pod "fc2a4cda-b502-4a1d-b54f-2b5947c8202c" (UID: "fc2a4cda-b502-4a1d-b54f-2b5947c8202c"). InnerVolumeSpecName "kube-api-access-tb2hz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.246093 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.246176 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.304546 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb2hz\" (UniqueName: \"kubernetes.io/projected/fc2a4cda-b502-4a1d-b54f-2b5947c8202c-kube-api-access-tb2hz\") on node \"crc\" DevicePath \"\"" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.890245 4769 scope.go:117] "RemoveContainer" containerID="48d1296a831911359c4a5a8e0c3c5260a7532c50d3b7d6fd13d165e0672bd407" Oct 06 08:38:52 crc kubenswrapper[4769]: I1006 08:38:52.890317 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-txv4h" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.190592 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-8r9sh"] Oct 06 08:38:53 crc kubenswrapper[4769]: E1006 08:38:53.190958 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2a4cda-b502-4a1d-b54f-2b5947c8202c" containerName="container-00" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.190970 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc2a4cda-b502-4a1d-b54f-2b5947c8202c" containerName="container-00" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.191152 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2a4cda-b502-4a1d-b54f-2b5947c8202c" containerName="container-00" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.191800 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.320991 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sqzx\" (UniqueName: \"kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.321393 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.423679 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sqzx\" (UniqueName: 
\"kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.423746 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.423820 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.443434 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sqzx\" (UniqueName: \"kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx\") pod \"crc-debug-8r9sh\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.511931 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.900603 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" event={"ID":"8c6d48e8-9c69-460a-ab61-389877b7bc28","Type":"ContainerStarted","Data":"5939fcb7eb95a4013fa012a37677596ab63226e8642fff0b66a31a9c2b2913ae"} Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.900919 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" event={"ID":"8c6d48e8-9c69-460a-ab61-389877b7bc28","Type":"ContainerStarted","Data":"5aaa8c6e028ccc214d268a592470e41cb79e6d0eaaefbf4d2b8d291a0f3b633a"} Oct 06 08:38:53 crc kubenswrapper[4769]: I1006 08:38:53.913170 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" podStartSLOduration=0.913147519 podStartE2EDuration="913.147519ms" podCreationTimestamp="2025-10-06 08:38:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-06 08:38:53.911822803 +0000 UTC m=+4930.436103960" watchObservedRunningTime="2025-10-06 08:38:53.913147519 +0000 UTC m=+4930.437428676" Oct 06 08:38:54 crc kubenswrapper[4769]: I1006 08:38:54.176808 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc2a4cda-b502-4a1d-b54f-2b5947c8202c" path="/var/lib/kubelet/pods/fc2a4cda-b502-4a1d-b54f-2b5947c8202c/volumes" Oct 06 08:38:54 crc kubenswrapper[4769]: I1006 08:38:54.916279 4769 generic.go:334] "Generic (PLEG): container finished" podID="8c6d48e8-9c69-460a-ab61-389877b7bc28" containerID="5939fcb7eb95a4013fa012a37677596ab63226e8642fff0b66a31a9c2b2913ae" exitCode=0 Oct 06 08:38:54 crc kubenswrapper[4769]: I1006 08:38:54.916332 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" 
event={"ID":"8c6d48e8-9c69-460a-ab61-389877b7bc28","Type":"ContainerDied","Data":"5939fcb7eb95a4013fa012a37677596ab63226e8642fff0b66a31a9c2b2913ae"} Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.272743 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.372928 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sqzx\" (UniqueName: \"kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx\") pod \"8c6d48e8-9c69-460a-ab61-389877b7bc28\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.373033 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host\") pod \"8c6d48e8-9c69-460a-ab61-389877b7bc28\" (UID: \"8c6d48e8-9c69-460a-ab61-389877b7bc28\") " Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.373332 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host" (OuterVolumeSpecName: "host") pod "8c6d48e8-9c69-460a-ab61-389877b7bc28" (UID: "8c6d48e8-9c69-460a-ab61-389877b7bc28"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.373626 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8c6d48e8-9c69-460a-ab61-389877b7bc28-host\") on node \"crc\" DevicePath \"\"" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.388882 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx" (OuterVolumeSpecName: "kube-api-access-6sqzx") pod "8c6d48e8-9c69-460a-ab61-389877b7bc28" (UID: "8c6d48e8-9c69-460a-ab61-389877b7bc28"). InnerVolumeSpecName "kube-api-access-6sqzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.474737 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sqzx\" (UniqueName: \"kubernetes.io/projected/8c6d48e8-9c69-460a-ab61-389877b7bc28-kube-api-access-6sqzx\") on node \"crc\" DevicePath \"\"" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.936094 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.941476 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-8r9sh" event={"ID":"8c6d48e8-9c69-460a-ab61-389877b7bc28","Type":"ContainerDied","Data":"5aaa8c6e028ccc214d268a592470e41cb79e6d0eaaefbf4d2b8d291a0f3b633a"} Oct 06 08:38:56 crc kubenswrapper[4769]: I1006 08:38:56.941530 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aaa8c6e028ccc214d268a592470e41cb79e6d0eaaefbf4d2b8d291a0f3b633a" Oct 06 08:39:00 crc kubenswrapper[4769]: I1006 08:39:00.129934 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-8r9sh"] Oct 06 08:39:00 crc kubenswrapper[4769]: I1006 08:39:00.136727 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-8r9sh"] Oct 06 08:39:00 crc kubenswrapper[4769]: I1006 08:39:00.179024 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6d48e8-9c69-460a-ab61-389877b7bc28" path="/var/lib/kubelet/pods/8c6d48e8-9c69-460a-ab61-389877b7bc28/volumes" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.304456 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-wjjjn"] Oct 06 08:39:01 crc kubenswrapper[4769]: E1006 08:39:01.305495 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6d48e8-9c69-460a-ab61-389877b7bc28" containerName="container-00" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.305511 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6d48e8-9c69-460a-ab61-389877b7bc28" containerName="container-00" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.305986 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6d48e8-9c69-460a-ab61-389877b7bc28" containerName="container-00" Oct 06 08:39:01 crc kubenswrapper[4769]: 
I1006 08:39:01.306926 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.451151 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x222k\" (UniqueName: \"kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.451584 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.553206 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x222k\" (UniqueName: \"kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.553256 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.553376 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") 
" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.573547 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x222k\" (UniqueName: \"kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k\") pod \"crc-debug-wjjjn\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.625905 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.980243 4769 generic.go:334] "Generic (PLEG): container finished" podID="7700af75-4fe3-4a15-8bcb-79f9014d4489" containerID="eb3a98a30b5f8f1e4b97d8c7b68d4dc414bd527b1251568566f8dae2385ec6b6" exitCode=0 Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.980313 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" event={"ID":"7700af75-4fe3-4a15-8bcb-79f9014d4489","Type":"ContainerDied","Data":"eb3a98a30b5f8f1e4b97d8c7b68d4dc414bd527b1251568566f8dae2385ec6b6"} Oct 06 08:39:01 crc kubenswrapper[4769]: I1006 08:39:01.980573 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" event={"ID":"7700af75-4fe3-4a15-8bcb-79f9014d4489","Type":"ContainerStarted","Data":"b7581d458c8db92d894003382e38a73dd9ee8d19d2d89fc70d0f7d269c23421b"} Oct 06 08:39:02 crc kubenswrapper[4769]: I1006 08:39:02.015534 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-wjjjn"] Oct 06 08:39:02 crc kubenswrapper[4769]: I1006 08:39:02.021947 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fdzrg/crc-debug-wjjjn"] Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.094997 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.280790 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host\") pod \"7700af75-4fe3-4a15-8bcb-79f9014d4489\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.280927 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host" (OuterVolumeSpecName: "host") pod "7700af75-4fe3-4a15-8bcb-79f9014d4489" (UID: "7700af75-4fe3-4a15-8bcb-79f9014d4489"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.281011 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x222k\" (UniqueName: \"kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k\") pod \"7700af75-4fe3-4a15-8bcb-79f9014d4489\" (UID: \"7700af75-4fe3-4a15-8bcb-79f9014d4489\") " Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.281636 4769 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7700af75-4fe3-4a15-8bcb-79f9014d4489-host\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.286480 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k" (OuterVolumeSpecName: "kube-api-access-x222k") pod "7700af75-4fe3-4a15-8bcb-79f9014d4489" (UID: "7700af75-4fe3-4a15-8bcb-79f9014d4489"). InnerVolumeSpecName "kube-api-access-x222k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.383296 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x222k\" (UniqueName: \"kubernetes.io/projected/7700af75-4fe3-4a15-8bcb-79f9014d4489-kube-api-access-x222k\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.486355 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/util/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.646572 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/util/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.686709 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/pull/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.698201 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/pull/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.858619 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/extract/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.878878 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/util/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.879798 4769 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_0da5b2e9368362304c733e302da704023e73b0b2df8ed109170f4705a8c8sb7_e3246024-d919-4ff6-abf7-2779f6622954/pull/0.log" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.997403 4769 scope.go:117] "RemoveContainer" containerID="eb3a98a30b5f8f1e4b97d8c7b68d4dc414bd527b1251568566f8dae2385ec6b6" Oct 06 08:39:03 crc kubenswrapper[4769]: I1006 08:39:03.998259 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fdzrg/crc-debug-wjjjn" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.038882 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5b974f6766-q767n_82e41047-a211-4e78-89bd-3c3b6fbf17c6/kube-rbac-proxy/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.100562 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-84bd8f6848-9k7gr_2861ed50-c4c4-4ce6-84bf-975f1ca89fd7/kube-rbac-proxy/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.163933 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5b974f6766-q767n_82e41047-a211-4e78-89bd-3c3b6fbf17c6/manager/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.177817 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7700af75-4fe3-4a15-8bcb-79f9014d4489" path="/var/lib/kubelet/pods/7700af75-4fe3-4a15-8bcb-79f9014d4489/volumes" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.294558 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-84bd8f6848-9k7gr_2861ed50-c4c4-4ce6-84bf-975f1ca89fd7/manager/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.313316 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58d86cd59d-phrqc_702ea655-638e-4db1-bf92-4531efcb7728/kube-rbac-proxy/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.320520 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58d86cd59d-phrqc_702ea655-638e-4db1-bf92-4531efcb7728/manager/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.464736 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-698456cdc6-h2gcm_362f2d56-78f2-4179-842a-cc3b7e77b8bf/kube-rbac-proxy/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.518719 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-698456cdc6-h2gcm_362f2d56-78f2-4179-842a-cc3b7e77b8bf/manager/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.604271 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5c497dbdb-tg9wh_b13c6907-c560-4be1-a0b6-5bb302f02f3e/kube-rbac-proxy/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.663587 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5c497dbdb-tg9wh_b13c6907-c560-4be1-a0b6-5bb302f02f3e/manager/0.log" Oct 06 08:39:04 crc kubenswrapper[4769]: I1006 08:39:04.697965 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6675647785-zwnqv_5c160dde-ed5f-4e2e-89f8-c793ee675f54/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.083902 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6675647785-zwnqv_5c160dde-ed5f-4e2e-89f8-c793ee675f54/manager/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.161771 
4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-84788b6bc5-q7mbs_d27cfd7b-7bb0-462e-b6c1-c7537b34511a/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.270156 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-84788b6bc5-q7mbs_d27cfd7b-7bb0-462e-b6c1-c7537b34511a/manager/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.336983 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f5894c49f-2djd8_0dcb449d-5590-431e-a8d5-7cb1f6c38a9d/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.378059 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f5894c49f-2djd8_0dcb449d-5590-431e-a8d5-7cb1f6c38a9d/manager/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.507586 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-57c9cdcf57-wjk8j_62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.606643 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-57c9cdcf57-wjk8j_62b99e9f-d5e1-4cbd-9d75-e9f9938d0a0a/manager/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.684346 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cb48dbc-bq7vc_02615b46-f027-4aef-b28e-8c4b1c7e1d21/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.723591 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7cb48dbc-bq7vc_02615b46-f027-4aef-b28e-8c4b1c7e1d21/manager/0.log" Oct 06 08:39:05 crc 
kubenswrapper[4769]: I1006 08:39:05.818245 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-d6c9dc5bc-9s2zz_7562bfed-f13e-4197-8169-adbed0092212/kube-rbac-proxy/0.log" Oct 06 08:39:05 crc kubenswrapper[4769]: I1006 08:39:05.909135 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-d6c9dc5bc-9s2zz_7562bfed-f13e-4197-8169-adbed0092212/manager/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.064499 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-69b956fbf6-8n98x_7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8/kube-rbac-proxy/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.068283 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-69b956fbf6-8n98x_7e2d1e8d-04fd-4d79-a2b3-270e4f3384f8/manager/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.214029 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c9b57c67-t6647_e140b4bd-2692-432a-8f4c-78f4462df844/kube-rbac-proxy/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.378654 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-6c9b57c67-t6647_e140b4bd-2692-432a-8f4c-78f4462df844/manager/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.390553 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f59f9d8-pvg66_049068f2-65f7-46d1-a0b1-d31fd9efb7b3/kube-rbac-proxy/0.log" Oct 06 08:39:06 crc kubenswrapper[4769]: I1006 08:39:06.431081 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f59f9d8-pvg66_049068f2-65f7-46d1-a0b1-d31fd9efb7b3/manager/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.107064 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8_03db1017-2d31-4f66-a1c6-a52297484016/manager/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.145388 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-66cc85b5d5lb8x8_03db1017-2d31-4f66-a1c6-a52297484016/kube-rbac-proxy/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.169331 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cfc658b9-ghwpn_a78bef32-e8ac-4828-9eac-ba7fa7a0d609/kube-rbac-proxy/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.370370 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-677d5bb784-lbb9s_fc67c917-8b2a-47b6-9232-f80ccd98c13d/kube-rbac-proxy/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.504698 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-677d5bb784-lbb9s_fc67c917-8b2a-47b6-9232-f80ccd98c13d/operator/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.708941 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-c968bb45-zttxz_80adcd74-7072-48aa-9f4e-e1896c5fe81c/kube-rbac-proxy/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.742604 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-fqtb5_118688b6-b9e2-4de4-8c39-6bd6c68d158e/registry-server/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 
08:39:07.769568 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-c968bb45-zttxz_80adcd74-7072-48aa-9f4e-e1896c5fe81c/manager/0.log" Oct 06 08:39:07 crc kubenswrapper[4769]: I1006 08:39:07.956893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-66f6d6849b-njlwf_b1c1ec64-da04-49e4-8eff-a9d020390333/manager/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.026823 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-4w6l6_603e7c24-39ad-4bd2-84ea-0e652c9a829c/operator/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.030868 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-66f6d6849b-njlwf_b1c1ec64-da04-49e4-8eff-a9d020390333/kube-rbac-proxy/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.152848 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cfc658b9-ghwpn_a78bef32-e8ac-4828-9eac-ba7fa7a0d609/manager/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.261604 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76d5577b-pm2s4_a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3/manager/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.269833 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-76d5577b-pm2s4_a0f015e6-bbe2-4fba-ac6a-3dbb6a818ad3/kube-rbac-proxy/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.334324 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f589c7597-dll8n_cbbe870b-89f2-4e48-989f-4491894e20e0/kube-rbac-proxy/0.log" Oct 06 08:39:08 
crc kubenswrapper[4769]: I1006 08:39:08.469056 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f589c7597-dll8n_cbbe870b-89f2-4e48-989f-4491894e20e0/manager/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.490484 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6bb6dcddc-c4ppt_e43df54a-23a1-4f04-947d-5d76ee8334e0/kube-rbac-proxy/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.533844 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-6bb6dcddc-c4ppt_e43df54a-23a1-4f04-947d-5d76ee8334e0/manager/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.679486 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d98cc5575-fjmzq_d925692d-8bc4-4013-9e6e-98f43c9cf89d/kube-rbac-proxy/0.log" Oct 06 08:39:08 crc kubenswrapper[4769]: I1006 08:39:08.699775 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d98cc5575-fjmzq_d925692d-8bc4-4013-9e6e-98f43c9cf89d/manager/0.log" Oct 06 08:39:22 crc kubenswrapper[4769]: I1006 08:39:22.245817 4769 patch_prober.go:28] interesting pod/machine-config-daemon-rlfqr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 06 08:39:22 crc kubenswrapper[4769]: I1006 08:39:22.246482 4769 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Oct 06 08:39:22 crc kubenswrapper[4769]: I1006 08:39:22.246750 4769 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" Oct 06 08:39:22 crc kubenswrapper[4769]: I1006 08:39:22.247813 4769 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"} pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 06 08:39:22 crc kubenswrapper[4769]: I1006 08:39:22.247891 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerName="machine-config-daemon" containerID="cri-o://c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" gracePeriod=600 Oct 06 08:39:22 crc kubenswrapper[4769]: E1006 08:39:22.385922 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:39:23 crc kubenswrapper[4769]: I1006 08:39:23.165184 4769 generic.go:334] "Generic (PLEG): container finished" podID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" exitCode=0 Oct 06 08:39:23 crc kubenswrapper[4769]: I1006 08:39:23.165268 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" 
event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerDied","Data":"c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"} Oct 06 08:39:23 crc kubenswrapper[4769]: I1006 08:39:23.165605 4769 scope.go:117] "RemoveContainer" containerID="d674122c44fab7feac37cd3295a437e54256b31dd0aaf5bd9ebe5b93ab3cd80a" Oct 06 08:39:23 crc kubenswrapper[4769]: I1006 08:39:23.166204 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:39:23 crc kubenswrapper[4769]: E1006 08:39:23.166584 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:39:26 crc kubenswrapper[4769]: I1006 08:39:26.116839 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bstmm_500255b4-789b-43dd-ba43-067682532ae9/control-plane-machine-set-operator/0.log" Oct 06 08:39:26 crc kubenswrapper[4769]: I1006 08:39:26.271095 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fgcjc_600730b8-8dad-4391-bcf2-9fe5a4ca21b8/kube-rbac-proxy/0.log" Oct 06 08:39:26 crc kubenswrapper[4769]: I1006 08:39:26.304544 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-fgcjc_600730b8-8dad-4391-bcf2-9fe5a4ca21b8/machine-api-operator/0.log" Oct 06 08:39:37 crc kubenswrapper[4769]: I1006 08:39:37.644463 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-l2jg6_259cbbd3-4950-4cbf-bcd0-ca39e1d77078/cert-manager-controller/0.log" Oct 06 08:39:37 crc kubenswrapper[4769]: I1006 08:39:37.789137 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-kp88x_ec86b22f-51e9-42f9-b4ce-357840aebe09/cert-manager-cainjector/0.log" Oct 06 08:39:37 crc kubenswrapper[4769]: I1006 08:39:37.840699 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-k52gj_6ac3e82d-c1b5-4384-87f3-7859e7f0c02a/cert-manager-webhook/0.log" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.106851 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] Oct 06 08:39:38 crc kubenswrapper[4769]: E1006 08:39:38.107288 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7700af75-4fe3-4a15-8bcb-79f9014d4489" containerName="container-00" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.107315 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="7700af75-4fe3-4a15-8bcb-79f9014d4489" containerName="container-00" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.107615 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="7700af75-4fe3-4a15-8bcb-79f9014d4489" containerName="container-00" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.109309 4769 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.119815 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.166943 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:39:38 crc kubenswrapper[4769]: E1006 08:39:38.167435 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.212952 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlxc\" (UniqueName: \"kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.213020 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.213134 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.316215 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlxc\" (UniqueName: \"kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.316311 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.316433 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.317968 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.318025 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.337998 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlxc\" (UniqueName: \"kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc\") pod \"redhat-marketplace-s4lcj\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.445376 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:38 crc kubenswrapper[4769]: I1006 08:39:38.990024 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] Oct 06 08:39:39 crc kubenswrapper[4769]: I1006 08:39:39.306684 4769 generic.go:334] "Generic (PLEG): container finished" podID="37e24fec-528e-460f-b951-fa14d8be6905" containerID="25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111" exitCode=0 Oct 06 08:39:39 crc kubenswrapper[4769]: I1006 08:39:39.306727 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerDied","Data":"25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111"} Oct 06 08:39:39 crc kubenswrapper[4769]: I1006 08:39:39.306753 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerStarted","Data":"84cdd598f7dfdfa053d16bf25c1b92d816409ec212333d949b7a4c64644919f3"} Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.324629 4769 generic.go:334] "Generic (PLEG): container 
finished" podID="37e24fec-528e-460f-b951-fa14d8be6905" containerID="29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d" exitCode=0 Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.325377 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerDied","Data":"29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d"} Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.486476 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.488395 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.501062 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.621594 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85r6j\" (UniqueName: \"kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.621658 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.621838 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.723175 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85r6j\" (UniqueName: \"kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.723249 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.723351 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.723807 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.724258 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.750742 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85r6j\" (UniqueName: \"kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j\") pod \"certified-operators-2sq5k\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:41 crc kubenswrapper[4769]: I1006 08:39:41.807869 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:42 crc kubenswrapper[4769]: I1006 08:39:42.128661 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:42 crc kubenswrapper[4769]: I1006 08:39:42.334568 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerStarted","Data":"dc5432299bc586785aecf9f0a7b83e4b81435ee83852654e8db49ab0af70e9cd"} Oct 06 08:39:42 crc kubenswrapper[4769]: I1006 08:39:42.337630 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerStarted","Data":"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f"} Oct 06 08:39:42 crc kubenswrapper[4769]: I1006 08:39:42.362597 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s4lcj" podStartSLOduration=1.9521623799999999 podStartE2EDuration="4.362582179s" podCreationTimestamp="2025-10-06 08:39:38 +0000 UTC" firstStartedPulling="2025-10-06 08:39:39.30863755 
+0000 UTC m=+4975.832918697" lastFinishedPulling="2025-10-06 08:39:41.719057349 +0000 UTC m=+4978.243338496" observedRunningTime="2025-10-06 08:39:42.360052969 +0000 UTC m=+4978.884334116" watchObservedRunningTime="2025-10-06 08:39:42.362582179 +0000 UTC m=+4978.886863326" Oct 06 08:39:43 crc kubenswrapper[4769]: I1006 08:39:43.348407 4769 generic.go:334] "Generic (PLEG): container finished" podID="c703ebe9-c23b-4f80-8718-f2d786295048" containerID="5a7c78eb3fcc8b2622350634b075a7e488c8bce3e6139581c23375360b8bb394" exitCode=0 Oct 06 08:39:43 crc kubenswrapper[4769]: I1006 08:39:43.348611 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerDied","Data":"5a7c78eb3fcc8b2622350634b075a7e488c8bce3e6139581c23375360b8bb394"} Oct 06 08:39:44 crc kubenswrapper[4769]: I1006 08:39:44.358599 4769 generic.go:334] "Generic (PLEG): container finished" podID="c703ebe9-c23b-4f80-8718-f2d786295048" containerID="725bce0d1736f67496a50cb671d8cd3970f62b4167a6ff1f5a86a6ca334b1dfb" exitCode=0 Oct 06 08:39:44 crc kubenswrapper[4769]: I1006 08:39:44.358679 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerDied","Data":"725bce0d1736f67496a50cb671d8cd3970f62b4167a6ff1f5a86a6ca334b1dfb"} Oct 06 08:39:45 crc kubenswrapper[4769]: I1006 08:39:45.374400 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerStarted","Data":"c5667b2d035cd5eba1d6b3e3c6a7acc47874ad313698ab17d8658d2976e2e9be"} Oct 06 08:39:48 crc kubenswrapper[4769]: I1006 08:39:48.445894 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:48 crc kubenswrapper[4769]: I1006 
08:39:48.446496 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:48 crc kubenswrapper[4769]: I1006 08:39:48.494355 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:48 crc kubenswrapper[4769]: I1006 08:39:48.513296 4769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2sq5k" podStartSLOduration=6.099843073 podStartE2EDuration="7.513279909s" podCreationTimestamp="2025-10-06 08:39:41 +0000 UTC" firstStartedPulling="2025-10-06 08:39:43.350679367 +0000 UTC m=+4979.874960514" lastFinishedPulling="2025-10-06 08:39:44.764116203 +0000 UTC m=+4981.288397350" observedRunningTime="2025-10-06 08:39:45.395006795 +0000 UTC m=+4981.919287952" watchObservedRunningTime="2025-10-06 08:39:48.513279909 +0000 UTC m=+4985.037561056" Oct 06 08:39:49 crc kubenswrapper[4769]: I1006 08:39:49.166704 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:39:49 crc kubenswrapper[4769]: E1006 08:39:49.167231 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:39:49 crc kubenswrapper[4769]: I1006 08:39:49.449851 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:49 crc kubenswrapper[4769]: I1006 08:39:49.494269 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] 
Oct 06 08:39:50 crc kubenswrapper[4769]: I1006 08:39:50.853117 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-p7rg7_802ef315-9d05-4501-ac19-e994a822fec7/nmstate-console-plugin/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.017921 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-ddkqb_358b3c8a-e981-4525-af7a-bbf05421b9fa/nmstate-handler/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.074522 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-r8v6q_6ab7169e-e46e-4e3b-a21a-7bd332467bb6/kube-rbac-proxy/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.102123 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-r8v6q_6ab7169e-e46e-4e3b-a21a-7bd332467bb6/nmstate-metrics/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.247221 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-zdtf9_a25249c6-ee28-4d96-a2c7-077e0a0bb198/nmstate-operator/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.284273 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-brg5f_dd69fd45-0ee0-4358-8f30-39e251e3a27f/nmstate-webhook/0.log" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.430923 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s4lcj" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="registry-server" containerID="cri-o://37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f" gracePeriod=2 Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.811321 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2sq5k" Oct 
06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.811696 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.867329 4769 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:51 crc kubenswrapper[4769]: I1006 08:39:51.906775 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.025476 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content\") pod \"37e24fec-528e-460f-b951-fa14d8be6905\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.025554 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xlxc\" (UniqueName: \"kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc\") pod \"37e24fec-528e-460f-b951-fa14d8be6905\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.025631 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities\") pod \"37e24fec-528e-460f-b951-fa14d8be6905\" (UID: \"37e24fec-528e-460f-b951-fa14d8be6905\") " Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.026750 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities" (OuterVolumeSpecName: "utilities") pod "37e24fec-528e-460f-b951-fa14d8be6905" (UID: "37e24fec-528e-460f-b951-fa14d8be6905"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.040657 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37e24fec-528e-460f-b951-fa14d8be6905" (UID: "37e24fec-528e-460f-b951-fa14d8be6905"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.050687 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc" (OuterVolumeSpecName: "kube-api-access-5xlxc") pod "37e24fec-528e-460f-b951-fa14d8be6905" (UID: "37e24fec-528e-460f-b951-fa14d8be6905"). InnerVolumeSpecName "kube-api-access-5xlxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.127096 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.127132 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xlxc\" (UniqueName: \"kubernetes.io/projected/37e24fec-528e-460f-b951-fa14d8be6905-kube-api-access-5xlxc\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.127145 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37e24fec-528e-460f-b951-fa14d8be6905-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.439924 4769 generic.go:334] "Generic (PLEG): container finished" podID="37e24fec-528e-460f-b951-fa14d8be6905" 
containerID="37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f" exitCode=0 Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.439987 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s4lcj" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.440008 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerDied","Data":"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f"} Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.440399 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s4lcj" event={"ID":"37e24fec-528e-460f-b951-fa14d8be6905","Type":"ContainerDied","Data":"84cdd598f7dfdfa053d16bf25c1b92d816409ec212333d949b7a4c64644919f3"} Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.440456 4769 scope.go:117] "RemoveContainer" containerID="37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.467617 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.471625 4769 scope.go:117] "RemoveContainer" containerID="29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.478000 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s4lcj"] Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.499923 4769 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.509479 4769 scope.go:117] "RemoveContainer" containerID="25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111" 
Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.541366 4769 scope.go:117] "RemoveContainer" containerID="37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f" Oct 06 08:39:52 crc kubenswrapper[4769]: E1006 08:39:52.541861 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f\": container with ID starting with 37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f not found: ID does not exist" containerID="37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.541927 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f"} err="failed to get container status \"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f\": rpc error: code = NotFound desc = could not find container \"37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f\": container with ID starting with 37fd906f26c80e80df30a8dfc3c2b5dc49ba3d444029e81fa94ab229e46ca19f not found: ID does not exist" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.541954 4769 scope.go:117] "RemoveContainer" containerID="29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d" Oct 06 08:39:52 crc kubenswrapper[4769]: E1006 08:39:52.542405 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d\": container with ID starting with 29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d not found: ID does not exist" containerID="29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.542480 4769 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d"} err="failed to get container status \"29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d\": rpc error: code = NotFound desc = could not find container \"29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d\": container with ID starting with 29dc027be6ba3855acf5e178779154c68dbf969e7770760264f085daae152f5d not found: ID does not exist" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.542522 4769 scope.go:117] "RemoveContainer" containerID="25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111" Oct 06 08:39:52 crc kubenswrapper[4769]: E1006 08:39:52.542876 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111\": container with ID starting with 25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111 not found: ID does not exist" containerID="25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111" Oct 06 08:39:52 crc kubenswrapper[4769]: I1006 08:39:52.542920 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111"} err="failed to get container status \"25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111\": rpc error: code = NotFound desc = could not find container \"25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111\": container with ID starting with 25702f6c41b103fad1b3e742c89f39e25876585e02ce83a51032350a4110c111 not found: ID does not exist" Oct 06 08:39:54 crc kubenswrapper[4769]: I1006 08:39:54.178397 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37e24fec-528e-460f-b951-fa14d8be6905" path="/var/lib/kubelet/pods/37e24fec-528e-460f-b951-fa14d8be6905/volumes" Oct 06 08:39:54 crc 
kubenswrapper[4769]: I1006 08:39:54.724731 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:54 crc kubenswrapper[4769]: I1006 08:39:54.724998 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2sq5k" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="registry-server" containerID="cri-o://c5667b2d035cd5eba1d6b3e3c6a7acc47874ad313698ab17d8658d2976e2e9be" gracePeriod=2 Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.466675 4769 generic.go:334] "Generic (PLEG): container finished" podID="c703ebe9-c23b-4f80-8718-f2d786295048" containerID="c5667b2d035cd5eba1d6b3e3c6a7acc47874ad313698ab17d8658d2976e2e9be" exitCode=0 Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.466833 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerDied","Data":"c5667b2d035cd5eba1d6b3e3c6a7acc47874ad313698ab17d8658d2976e2e9be"} Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.467257 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sq5k" event={"ID":"c703ebe9-c23b-4f80-8718-f2d786295048","Type":"ContainerDied","Data":"dc5432299bc586785aecf9f0a7b83e4b81435ee83852654e8db49ab0af70e9cd"} Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.467277 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5432299bc586785aecf9f0a7b83e4b81435ee83852654e8db49ab0af70e9cd" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.557555 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.716503 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85r6j\" (UniqueName: \"kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j\") pod \"c703ebe9-c23b-4f80-8718-f2d786295048\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.716580 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities\") pod \"c703ebe9-c23b-4f80-8718-f2d786295048\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.716804 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content\") pod \"c703ebe9-c23b-4f80-8718-f2d786295048\" (UID: \"c703ebe9-c23b-4f80-8718-f2d786295048\") " Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.717604 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities" (OuterVolumeSpecName: "utilities") pod "c703ebe9-c23b-4f80-8718-f2d786295048" (UID: "c703ebe9-c23b-4f80-8718-f2d786295048"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.720484 4769 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-utilities\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.761738 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j" (OuterVolumeSpecName: "kube-api-access-85r6j") pod "c703ebe9-c23b-4f80-8718-f2d786295048" (UID: "c703ebe9-c23b-4f80-8718-f2d786295048"). InnerVolumeSpecName "kube-api-access-85r6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.771657 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c703ebe9-c23b-4f80-8718-f2d786295048" (UID: "c703ebe9-c23b-4f80-8718-f2d786295048"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.822321 4769 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c703ebe9-c23b-4f80-8718-f2d786295048-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:55 crc kubenswrapper[4769]: I1006 08:39:55.822366 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85r6j\" (UniqueName: \"kubernetes.io/projected/c703ebe9-c23b-4f80-8718-f2d786295048-kube-api-access-85r6j\") on node \"crc\" DevicePath \"\"" Oct 06 08:39:56 crc kubenswrapper[4769]: I1006 08:39:56.474916 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sq5k" Oct 06 08:39:56 crc kubenswrapper[4769]: I1006 08:39:56.493595 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:56 crc kubenswrapper[4769]: I1006 08:39:56.501267 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2sq5k"] Oct 06 08:39:58 crc kubenswrapper[4769]: I1006 08:39:58.183076 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" path="/var/lib/kubelet/pods/c703ebe9-c23b-4f80-8718-f2d786295048/volumes" Oct 06 08:40:00 crc kubenswrapper[4769]: I1006 08:40:00.167738 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:40:00 crc kubenswrapper[4769]: E1006 08:40:00.168520 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:40:06 crc kubenswrapper[4769]: I1006 08:40:06.889710 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-9nkz5_7cfbeaee-4edb-49a6-a887-520cd4922ca1/kube-rbac-proxy/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.059907 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-9nkz5_7cfbeaee-4edb-49a6-a887-520cd4922ca1/controller/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.137499 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-frr-files/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.704938 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-frr-files/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.755257 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-reloader/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.807263 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-metrics/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.807287 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-reloader/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.948470 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-frr-files/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.951344 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-reloader/0.log" Oct 06 08:40:07 crc kubenswrapper[4769]: I1006 08:40:07.992334 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-metrics/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.034540 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-metrics/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.199809 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-reloader/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.209954 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-metrics/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.223688 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/cp-frr-files/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.270666 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/controller/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.431567 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/frr-metrics/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.501824 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/kube-rbac-proxy-frr/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.509257 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/kube-rbac-proxy/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.642036 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/reloader/0.log" Oct 06 08:40:08 crc kubenswrapper[4769]: I1006 08:40:08.732065 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-dzl96_9bf51374-74ce-4d2d-a19b-6fcc29e09d29/frr-k8s-webhook-server/0.log" Oct 06 08:40:09 crc kubenswrapper[4769]: I1006 08:40:09.095658 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-778bd9b8cd-kv2fg_45aedf89-f0a0-4a64-839e-5dd2676b71ae/manager/0.log" Oct 06 08:40:09 crc kubenswrapper[4769]: I1006 08:40:09.201030 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b6689b594-655tz_9f122875-038b-4bae-ad57-07c899fc54ad/webhook-server/0.log" Oct 06 08:40:09 crc kubenswrapper[4769]: I1006 08:40:09.341688 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqlvs_d88f6701-fae7-4e41-b349-99fce99be6da/kube-rbac-proxy/0.log" Oct 06 08:40:09 crc kubenswrapper[4769]: I1006 08:40:09.983585 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5mfsm_c9cec005-4cae-4484-bfe3-03bed62e27b8/frr/0.log" Oct 06 08:40:09 crc kubenswrapper[4769]: I1006 08:40:09.994155 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tqlvs_d88f6701-fae7-4e41-b349-99fce99be6da/speaker/0.log" Oct 06 08:40:15 crc kubenswrapper[4769]: I1006 08:40:15.166053 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:40:15 crc kubenswrapper[4769]: E1006 08:40:15.166825 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.440818 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/util/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 
08:40:22.564971 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/util/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.595112 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/pull/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.747258 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/pull/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.899385 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/extract/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.927189 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/util/0.log" Oct 06 08:40:22 crc kubenswrapper[4769]: I1006 08:40:22.946170 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2zcxl6_38973f50-be18-4cb1-a8bd-cb5d2eb5b22c/pull/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.168744 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-utilities/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.262934 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-utilities/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.299514 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-content/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.327926 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-content/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.515158 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-utilities/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.516893 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/extract-content/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.703828 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-utilities/0.log" Oct 06 08:40:23 crc kubenswrapper[4769]: I1006 08:40:23.989346 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-content/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.047204 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-utilities/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.099752 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-content/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.258043 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-glqgj_9cfd3cb7-51a3-4926-ad98-533e3285dea9/registry-server/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.300031 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-utilities/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.361275 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/extract-content/0.log" Oct 06 08:40:24 crc kubenswrapper[4769]: I1006 08:40:24.559056 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/util/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.040148 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5c7kx_6575a5fc-632c-4f5b-927e-aa949b8fecbc/registry-server/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.209071 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/pull/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.209325 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/pull/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.243283 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/util/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.396000 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/util/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.415243 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/extract/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.442151 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbml8c_4396dd0e-d481-430a-a35a-73278b5e925f/pull/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.623695 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-dccdw_42a92800-c31c-405a-acd7-5c33fcb1aa05/marketplace-operator/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.753933 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-utilities/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.987278 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-content/0.log" Oct 06 08:40:25 crc kubenswrapper[4769]: I1006 08:40:25.993260 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-utilities/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.005875 4769 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-content/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.522366 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-content/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.545950 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/extract-utilities/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.738526 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vr9gm_a0efcfec-1046-4b55-8fed-8f271f6a9d99/registry-server/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.759707 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-utilities/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.904769 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-content/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.912077 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-utilities/0.log" Oct 06 08:40:26 crc kubenswrapper[4769]: I1006 08:40:26.950316 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-content/0.log" Oct 06 08:40:27 crc kubenswrapper[4769]: I1006 08:40:27.119100 4769 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-utilities/0.log" Oct 06 08:40:27 crc kubenswrapper[4769]: I1006 08:40:27.142352 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/extract-content/0.log" Oct 06 08:40:27 crc kubenswrapper[4769]: I1006 08:40:27.858450 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-mdvtg_7428fd04-528c-4403-8f36-227127f6ee19/registry-server/0.log" Oct 06 08:40:29 crc kubenswrapper[4769]: I1006 08:40:29.165536 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:40:29 crc kubenswrapper[4769]: E1006 08:40:29.166033 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:40:44 crc kubenswrapper[4769]: I1006 08:40:44.178737 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:40:44 crc kubenswrapper[4769]: E1006 08:40:44.179533 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:40:57 crc kubenswrapper[4769]: I1006 08:40:57.166546 
4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:40:57 crc kubenswrapper[4769]: E1006 08:40:57.167246 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:41:08 crc kubenswrapper[4769]: I1006 08:41:08.167568 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:41:08 crc kubenswrapper[4769]: E1006 08:41:08.168746 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:41:19 crc kubenswrapper[4769]: I1006 08:41:19.165610 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:41:19 crc kubenswrapper[4769]: E1006 08:41:19.167285 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:41:31 crc kubenswrapper[4769]: I1006 
08:41:31.165533 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:41:31 crc kubenswrapper[4769]: E1006 08:41:31.166435 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:41:46 crc kubenswrapper[4769]: I1006 08:41:46.166485 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:41:46 crc kubenswrapper[4769]: E1006 08:41:46.167591 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:41:58 crc kubenswrapper[4769]: I1006 08:41:58.170898 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:41:58 crc kubenswrapper[4769]: E1006 08:41:58.171465 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:42:10 crc 
kubenswrapper[4769]: I1006 08:42:10.166222 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:42:10 crc kubenswrapper[4769]: E1006 08:42:10.167082 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:42:17 crc kubenswrapper[4769]: I1006 08:42:17.731206 4769 generic.go:334] "Generic (PLEG): container finished" podID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerID="200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794" exitCode=0 Oct 06 08:42:17 crc kubenswrapper[4769]: I1006 08:42:17.731331 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fdzrg/must-gather-km72z" event={"ID":"bcda4cb1-625c-403f-b766-b03eb41ee198","Type":"ContainerDied","Data":"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794"} Oct 06 08:42:17 crc kubenswrapper[4769]: I1006 08:42:17.732277 4769 scope.go:117] "RemoveContainer" containerID="200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794" Oct 06 08:42:17 crc kubenswrapper[4769]: I1006 08:42:17.795552 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fdzrg_must-gather-km72z_bcda4cb1-625c-403f-b766-b03eb41ee198/gather/0.log" Oct 06 08:42:22 crc kubenswrapper[4769]: I1006 08:42:22.166330 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:42:22 crc kubenswrapper[4769]: E1006 08:42:22.167190 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:42:25 crc kubenswrapper[4769]: I1006 08:42:25.883087 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-fdzrg/must-gather-km72z"] Oct 06 08:42:25 crc kubenswrapper[4769]: I1006 08:42:25.884126 4769 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-fdzrg/must-gather-km72z" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="copy" containerID="cri-o://aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0" gracePeriod=2 Oct 06 08:42:25 crc kubenswrapper[4769]: I1006 08:42:25.893612 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-fdzrg/must-gather-km72z"] Oct 06 08:42:26 crc kubenswrapper[4769]: E1006 08:42:26.001173 4769 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcda4cb1_625c_403f_b766_b03eb41ee198.slice/crio-aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0.scope\": RecentStats: unable to find data in memory cache]" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.256585 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fdzrg_must-gather-km72z_bcda4cb1-625c-403f-b766-b03eb41ee198/copy/0.log" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.257677 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.440531 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output\") pod \"bcda4cb1-625c-403f-b766-b03eb41ee198\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.440768 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmd5w\" (UniqueName: \"kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w\") pod \"bcda4cb1-625c-403f-b766-b03eb41ee198\" (UID: \"bcda4cb1-625c-403f-b766-b03eb41ee198\") " Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.448670 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w" (OuterVolumeSpecName: "kube-api-access-vmd5w") pod "bcda4cb1-625c-403f-b766-b03eb41ee198" (UID: "bcda4cb1-625c-403f-b766-b03eb41ee198"). InnerVolumeSpecName "kube-api-access-vmd5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.544286 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmd5w\" (UniqueName: \"kubernetes.io/projected/bcda4cb1-625c-403f-b766-b03eb41ee198-kube-api-access-vmd5w\") on node \"crc\" DevicePath \"\"" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.607316 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bcda4cb1-625c-403f-b766-b03eb41ee198" (UID: "bcda4cb1-625c-403f-b766-b03eb41ee198"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.646196 4769 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bcda4cb1-625c-403f-b766-b03eb41ee198-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.872537 4769 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-fdzrg_must-gather-km72z_bcda4cb1-625c-403f-b766-b03eb41ee198/copy/0.log" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.873686 4769 generic.go:334] "Generic (PLEG): container finished" podID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerID="aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0" exitCode=143 Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.873756 4769 scope.go:117] "RemoveContainer" containerID="aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.873767 4769 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fdzrg/must-gather-km72z" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.894987 4769 scope.go:117] "RemoveContainer" containerID="200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.971644 4769 scope.go:117] "RemoveContainer" containerID="aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0" Oct 06 08:42:26 crc kubenswrapper[4769]: E1006 08:42:26.972277 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0\": container with ID starting with aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0 not found: ID does not exist" containerID="aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.972399 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0"} err="failed to get container status \"aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0\": rpc error: code = NotFound desc = could not find container \"aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0\": container with ID starting with aa75bcf76015109d8177da1dc6087d0a784f2ede10d27e7317b2a0ddd949a9a0 not found: ID does not exist" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.972529 4769 scope.go:117] "RemoveContainer" containerID="200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794" Oct 06 08:42:26 crc kubenswrapper[4769]: E1006 08:42:26.973067 4769 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794\": container with ID starting with 
200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794 not found: ID does not exist" containerID="200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794" Oct 06 08:42:26 crc kubenswrapper[4769]: I1006 08:42:26.973103 4769 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794"} err="failed to get container status \"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794\": rpc error: code = NotFound desc = could not find container \"200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794\": container with ID starting with 200aef391b339f2a0c1af3b90b66e9cbb5852f5ec4777ae022ceac6f50759794 not found: ID does not exist" Oct 06 08:42:28 crc kubenswrapper[4769]: I1006 08:42:28.177981 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" path="/var/lib/kubelet/pods/bcda4cb1-625c-403f-b766-b03eb41ee198/volumes" Oct 06 08:42:35 crc kubenswrapper[4769]: I1006 08:42:35.165638 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:42:35 crc kubenswrapper[4769]: E1006 08:42:35.166375 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:42:47 crc kubenswrapper[4769]: I1006 08:42:47.166012 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:42:47 crc kubenswrapper[4769]: E1006 08:42:47.168860 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:43:00 crc kubenswrapper[4769]: I1006 08:43:00.166271 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:43:00 crc kubenswrapper[4769]: E1006 08:43:00.167171 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:43:11 crc kubenswrapper[4769]: I1006 08:43:11.167154 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:43:11 crc kubenswrapper[4769]: E1006 08:43:11.168357 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f" Oct 06 08:43:25 crc kubenswrapper[4769]: I1006 08:43:25.168159 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912" Oct 06 08:43:25 crc kubenswrapper[4769]: E1006 08:43:25.169706 4769 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:43:38 crc kubenswrapper[4769]: I1006 08:43:38.169079 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"
Oct 06 08:43:38 crc kubenswrapper[4769]: E1006 08:43:38.170950 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:43:50 crc kubenswrapper[4769]: I1006 08:43:50.166993 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"
Oct 06 08:43:50 crc kubenswrapper[4769]: E1006 08:43:50.168080 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:44:01 crc kubenswrapper[4769]: I1006 08:44:01.166091 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"
Oct 06 08:44:01 crc kubenswrapper[4769]: E1006 08:44:01.167199 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:44:14 crc kubenswrapper[4769]: I1006 08:44:14.171820 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"
Oct 06 08:44:14 crc kubenswrapper[4769]: E1006 08:44:14.172954 4769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rlfqr_openshift-machine-config-operator(ff761ae3-3c80-40f1-9aff-ea1585a9199f)\"" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" podUID="ff761ae3-3c80-40f1-9aff-ea1585a9199f"
Oct 06 08:44:29 crc kubenswrapper[4769]: I1006 08:44:29.166057 4769 scope.go:117] "RemoveContainer" containerID="c8acab26b8468454dda6d7e5cc0aff82fcb3cdec90b0ad8a921471248083d912"
Oct 06 08:44:30 crc kubenswrapper[4769]: I1006 08:44:30.195235 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rlfqr" event={"ID":"ff761ae3-3c80-40f1-9aff-ea1585a9199f","Type":"ContainerStarted","Data":"2323a747b82947cf51cd865fec0cd96f2df690ec55a779d0d86c843f8d67d67e"}
Oct 06 08:44:59 crc kubenswrapper[4769]: I1006 08:44:59.278467 4769 scope.go:117] "RemoveContainer" containerID="5939fcb7eb95a4013fa012a37677596ab63226e8642fff0b66a31a9c2b2913ae"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.181520 4769 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"]
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182290 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182313 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182345 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="extract-utilities"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182356 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="extract-utilities"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182368 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="copy"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182376 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="copy"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182389 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="extract-content"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182395 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="extract-content"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182410 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="extract-content"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182417 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="extract-content"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182459 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="gather"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182467 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="gather"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182484 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182492 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: E1006 08:45:00.182505 4769 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="extract-utilities"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182511 4769 state_mem.go:107] "Deleted CPUSet assignment" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="extract-utilities"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182720 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="copy"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182742 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="c703ebe9-c23b-4f80-8718-f2d786295048" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182758 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcda4cb1-625c-403f-b766-b03eb41ee198" containerName="gather"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.182769 4769 memory_manager.go:354] "RemoveStaleState removing state" podUID="37e24fec-528e-460f-b951-fa14d8be6905" containerName="registry-server"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.183624 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.186650 4769 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.186922 4769 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.204757 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"]
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.287085 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.287134 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdv6\" (UniqueName: \"kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.287167 4769 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.389502 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.390501 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdv6\" (UniqueName: \"kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.390556 4769 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.391922 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.398391 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.408946 4769 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdv6\" (UniqueName: \"kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6\") pod \"collect-profiles-29329005-zsl7t\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:00 crc kubenswrapper[4769]: I1006 08:45:00.510041 4769 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:01 crc kubenswrapper[4769]: I1006 08:45:01.012025 4769 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"]
Oct 06 08:45:01 crc kubenswrapper[4769]: I1006 08:45:01.582209 4769 generic.go:334] "Generic (PLEG): container finished" podID="41b548eb-92ed-447d-a1ee-6b7ac514ce77" containerID="36fee3e2d1875c0a4547eea68b99184bea90ef12e0fe3722c149d0f29a9f97af" exitCode=0
Oct 06 08:45:01 crc kubenswrapper[4769]: I1006 08:45:01.582357 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t" event={"ID":"41b548eb-92ed-447d-a1ee-6b7ac514ce77","Type":"ContainerDied","Data":"36fee3e2d1875c0a4547eea68b99184bea90ef12e0fe3722c149d0f29a9f97af"}
Oct 06 08:45:01 crc kubenswrapper[4769]: I1006 08:45:01.582755 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t" event={"ID":"41b548eb-92ed-447d-a1ee-6b7ac514ce77","Type":"ContainerStarted","Data":"9fa4d19199735c8724283b7dc8183ac9b3262edf25145f03810abe371a8b5a3b"}
Oct 06 08:45:02 crc kubenswrapper[4769]: I1006 08:45:02.970652 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.159466 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdv6\" (UniqueName: \"kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6\") pod \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") "
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.159523 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume\") pod \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") "
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.159644 4769 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume\") pod \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\" (UID: \"41b548eb-92ed-447d-a1ee-6b7ac514ce77\") "
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.160511 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume" (OuterVolumeSpecName: "config-volume") pod "41b548eb-92ed-447d-a1ee-6b7ac514ce77" (UID: "41b548eb-92ed-447d-a1ee-6b7ac514ce77"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.169227 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "41b548eb-92ed-447d-a1ee-6b7ac514ce77" (UID: "41b548eb-92ed-447d-a1ee-6b7ac514ce77"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.169533 4769 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6" (OuterVolumeSpecName: "kube-api-access-bvdv6") pod "41b548eb-92ed-447d-a1ee-6b7ac514ce77" (UID: "41b548eb-92ed-447d-a1ee-6b7ac514ce77"). InnerVolumeSpecName "kube-api-access-bvdv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.262775 4769 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/41b548eb-92ed-447d-a1ee-6b7ac514ce77-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.262814 4769 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdv6\" (UniqueName: \"kubernetes.io/projected/41b548eb-92ed-447d-a1ee-6b7ac514ce77-kube-api-access-bvdv6\") on node \"crc\" DevicePath \"\""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.262823 4769 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41b548eb-92ed-447d-a1ee-6b7ac514ce77-config-volume\") on node \"crc\" DevicePath \"\""
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.602675 4769 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t" event={"ID":"41b548eb-92ed-447d-a1ee-6b7ac514ce77","Type":"ContainerDied","Data":"9fa4d19199735c8724283b7dc8183ac9b3262edf25145f03810abe371a8b5a3b"}
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.602717 4769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fa4d19199735c8724283b7dc8183ac9b3262edf25145f03810abe371a8b5a3b"
Oct 06 08:45:03 crc kubenswrapper[4769]: I1006 08:45:03.602738 4769 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29329005-zsl7t"
Oct 06 08:45:04 crc kubenswrapper[4769]: I1006 08:45:04.060644 4769 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg"]
Oct 06 08:45:04 crc kubenswrapper[4769]: I1006 08:45:04.068165 4769 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29328960-gpvzg"]
Oct 06 08:45:04 crc kubenswrapper[4769]: I1006 08:45:04.177708 4769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46926e5d-b1b0-4779-aa80-c66e897833a0" path="/var/lib/kubelet/pods/46926e5d-b1b0-4779-aa80-c66e897833a0/volumes"